submitted 1 year ago by L4s@lemmy.world to c/technology@lemmy.world

Thousands of authors demand payment from AI companies for use of copyrighted works::Thousands of published authors are requesting payment from tech companies for the use of their copyrighted works in training artificial intelligence tools, marking the latest intellectual property critique to target AI development.

[-] cerevant@lemmy.world 44 points 1 year ago

There is already a business model for compensating authors: it is called buying the book. If the AI trainers are pirating books, then yeah - sue them.

There are plagiarism and copyright laws to protect the output of these tools: if the output is infringing, then sue them. However, if the output of an AI would not be considered infringing for a human, then it isn’t infringement.

When you sell a book, you don’t get to control how that book is used. You can’t tell me that I can’t quote your book (within fair use restrictions). You can’t tell me that I can’t refer to your book in a blog post. You can’t dictate who may and may not read a book. You can’t tell me that I can’t give a book to a friend. Or an enemy. Or an anarchist.

Folks, this isn’t a new problem, and it doesn’t need new laws.

[-] Dark_Arc@lemmy.world 58 points 1 year ago

It's 100% a new problem. There's established precedent for things costing different amounts depending on their intended use.

For example, buying a consumer copy of a song doesn't give you the right to play that song in a stadium or a restaurant.

Training an entire AI to make a potentially infinite number of derived works from your work is 100% worthy of requiring a special agreement. This goes beyond simple payment, to consent: a climate expert might not want their work in an AI that might severely mischaracterize its conclusions, or might want to require that certain queries be regularly checked by a human, etc.

[-] bh11235@infosec.pub 2 points 1 year ago* (last edited 1 year ago)

Well, fine, and I can't fault new published material having a "no AI" clause in its terms of service. But that doesn't mean we get to dream this clause into being retroactively for all the works ChatGPT was trained on. Even the most reasonable law in the world can't be enforced on someone who broke it 6 months before it was legislated.

Fortunately the "horses out the barn" effect here is maybe not so bad. Imagine the FOMO and user frustration when ToS & legislation catch up and now ChatGPT has no access to the latest books, music, news, research, everything. Just stuff from before authors knew to include the "hands off" clause - basically like the knowledge cutoff, but forever. It's untenable, OpenAI will be forced to cave and pay up.

[-] DandomRude@lemmy.world 12 points 1 year ago

OpenAI and such being forced to pay a share seems far from the worst scenario I can imagine. I think it would be much worse if artists, writers, scientists, open source developers and so on were forced to stop making their works freely available because they don't want their creations to be used by others for commercial purposes. That could really mean that large parts of humanity would be cut off from knowledge.

I can well imagine copyleft gaining importance in this context. But this form of licencing seems pretty worthless to me if you don't have the time or resources to sue for your rights - or even to deal with the various forms of licencing you need to know about to do so.

[-] kklusz@lemmy.world 1 points 1 year ago

I think it would be much worse if artists, writers, scientists, open source developers and so on were forced to stop making their works freely available because they don’t want their creations to be used by others for commercial purposes.

None of them are forced to stop making their works freely available. If they want to voluntarily stop making their works freely available to prevent commercial interests from using them, that’s on them.

Besides, that’s not so bad to me. The rest of us who want to share with humanity will keep sharing with humanity. The worst case imo is that artists, writers, scientists, and open source developers cannot take full advantage of the latest advancements in tech to make more and better art, writing, science, and software. We cannot let humanity’s creative potential be held hostage by anyone.

That could really mean that large parts of humanity would be cut off from knowledge.

On the contrary, AI is making knowledge more accessible than ever before to large parts of humanity. The only other comparable technologies to have done this in recent times are the internet and search engines. Thank goodness the internet enables piracy that allows anyone to download troves of ebooks for free. I look forward to AI doing the same on an even greater scale.

[-] FlyingSquid@lemmy.world 7 points 1 year ago

Shouldn't there be a way to freely share your works without having to expect an AI to train on them and then be able to spit them back out elsewhere without attribution?

[-] CmdrShepard@lemmy.one 3 points 1 year ago

The rest of us who want to share with humanity will keep sharing with humanity. The worst case imo is that artists, writers, scientists, and open source developers cannot take full advantage of the latest advancements in tech to make more and better art, writing, science, and software. We cannot let humanity’s creative potential be held hostage by anyone.

You're not talking about sharing it with humanity, you're talking about feeding it into an AI. How is this holding back the creative potential of humanity? Again, you're talking about feeding and training a computer with this material.

[-] scarabic@lemmy.world 30 points 1 year ago

When you sell a book, you don’t get to control how that book is used.

This is demonstrably wrong. You cannot buy a book, and then go use it to print your own copies for sale. You cannot use it as a script for a commercial movie. You cannot go publish a sequel to it.

Now please just try to tell me that AI training is specifically covered by fair use and satire case law. Spoiler: you can’t.

This is a novel (pun intended) problem space and deserves to be discussed and decided, like everything else. So yeah, your cavalier dismissal is cavalierly dismissed.

[-] Zormat@lemmy.blahaj.zone 10 points 1 year ago

I completely fail to see how it wouldn't be considered transformative work

[-] scarabic@lemmy.world 9 points 1 year ago

It fails the transcendence criterion. Transformative works go beyond the original purpose of their source material to produce a whole new category of thing or benefit that would otherwise not be available.

Taking 1000 fan paintings of Sauron and using them in combination to create 1 new painting of Sauron in no way transcends the original purpose of the source material. The AI painting of Sauron isn’t some new and different thing. It’s an entirely mechanical iteration on its input material. In fact the derived work competes directly with the source material which should show that it’s not transcendent.

We can disagree on this and still agree that it’s debatable and should be decided in court. The person above that I’m responding to just wants to say “bah!” and dismiss the whole thing. If we can litigate the issue right here, a bar I believe this thread has already met, then judges and lawmakers should litigate it in our institutions. After all the potential scale of this far reaching issue is enormous. I think it’s incredibly irresponsible to say feh nothing new here move on.

[-] HumbertTetere@feddit.de 3 points 1 year ago

I do think you have a point here, but I don't agree with the example. If a fan creates the 1001st fan painting after looking at the others, it might be quite similar if they lack the artistic ability to express their own unique view. And it also competes with its sources, yet it's generally accepted.

[-] Phlogiston@lemmy.world 3 points 1 year ago

Being able to dialog with a book, even to the point of asking the AI to "take on the persona of a character in the book" and carry on an ongoing conversation, is substantively a transcendent version of the original. That one can, as a small subset of that transformed version, get quotes from the original work feels like a small part of this new work.

If this had been released for a single work - like, "here is a Star Wars AI that can take on the persona of Star Wars characters" and answer questions about the Star Wars universe, etc. - I think it's more likely that the position I'm taking here would lose the debate. But this is transformative against the entire set of prior material from books, movies, film, debate, art, science, philosophy, etc. It merges and combines all of that. I think the sheer scope of this new thing supports the idea that it's truly transformative.

A possible compromise would be to tax AI and use the proceeds to fund a UBI initiative. True, we'd get to argue whether high-profile authors with IP that catches the public's attention should get more than a mere blogger or a random online contributor - but the basic path is that AI is trained on, and succeeds by standing on, the shoulders of all people. So all people should get some benefit.

[-] jecxjo@midwest.social 8 points 1 year ago

Typically the argument has been "a robot can't make transformative works because it's a robot." People think our brains are special when in reality they are just really lossy.

[-] Zormat@lemmy.blahaj.zone 6 points 1 year ago

Even if you buy that premise, the output of the robot is only superficially similar to the work it was trained on, so there's no copyright infringement there. And the training process itself is done by humans; it takes some tortured logic to deny the technology's transformative nature.

[-] Hildegarde@lemmy.world 4 points 1 year ago

Transformativeness is only one of the four fair use factors. Being transformative can't alone make something fair use.

Even if AI is transformative, it would likely fail on the third factor. Fair use requires you to take the minimum amount of the copyrighted work, and AI companies scrape as much data as possible to train their models. Very unlikely to support a finding of fair use.

The final factor is market impact. Generative AIs are built to mimic the creative output of human authorship; by design, AI acts as a market replacement for it, so it would likely fail on this factor as well.

Regardless, trained AI models are unlikely to be copyrightable. Copyrights require human authorship which is why AI and animal generated art are not copyrightable.

A trained AI model is a piece of software, so it might seem protectable by patents because it is functional rather than expressive. But a patent requires you to describe how it works, and you can't do that with AI. And a trained AI model is self-generated from training data, so there's no human authorship even if trained AI models were copyrightable.

Exactly which laws apply to AI models is unclear, and it will likely be determined by court cases.

[-] cerevant@lemmy.world 7 points 1 year ago

No, you misunderstand. Yes, they can control how the content in the book is used - that’s what copyright is. But they can’t control what I do with the book - I can read it, I can burn it, I can memorize it, I can throw it up on my roof.

My argument is that there is nothing wrong with training an AI with a book - that's input for the AI, and that is indistinguishable from a human reading it.

Now what the AI does with the content - if it plagiarizes or violates fair use - that's a problem, but those problems are already covered by copyright law. They have no more business saying what can or cannot be input into an AI than they can restrict what I can read (and learn from). They can absolutely enforce their copyright on the output of the AI, just like they can if I print copies of their book.

My objection is strictly on the input side, and the output is already restricted.

[-] Redtitwhore@lemmy.world 4 points 1 year ago

Makes sense. I would love to hear how anyone can disagree with this. Just because an AI learned or trained from a book doesn't automatically mean it violated any copyrights.

[-] cerevant@lemmy.world 2 points 1 year ago* (last edited 1 year ago)

The base assumption of those with that argument is that an AI is incapable of being original, so it is "stealing" anything it is trained on. The problem with that logic is that's exactly how humans work - everything they say or do is derivative of their experiences. We combine pieces of information from different sources and connect them in a way that is original - at least from our perspective. And not surprisingly, that's what we've programmed AI to do.

Yes, AI can produce copyright violations. They should be programmed not to. They should cite their sources when appropriate. AI needs to "learn" the same lessons we learned about not copy-pasting Wikipedia into a term paper.

[-] lily33@lemmy.world 2 points 1 year ago* (last edited 1 year ago)

It's specifically distribution of the work or derivatives that copyright prevents.

So you could make an argument that an LLM that's memorized the book and can reproduce (parts of) it upon request is infringing. But one that's merely trained on the book, but hasn't memorized it, should be fine.

[-] volkhavaar@lemmy.world 16 points 1 year ago

This is a little off, when you quote a book you put the name of the book you’re quoting. When you refer to a book, you, um, refer to the book?

I think the gist of these authors complaints is that a sort of “technology laundered plagiarism” is occurring.

[-] cloudless@feddit.uk 15 points 1 year ago

I asked Bing Chat for the 10th paragraph of the first Harry Potter book, and it gave me this:

"He couldn’t know that at this very moment, people meeting in secret all over the country were holding up their glasses and saying in hushed voices: ‘To Harry Potter – the boy who lived!’"

It looks like technically I might be able to obtain the entire book (eventually) by asking Bing the right questions?

[-] cerevant@lemmy.world 4 points 1 year ago* (last edited 1 year ago)

Then this is a copyright violation - it violates any standard for such, and the AI should be altered to account for that.

What I’m seeing is people complaining about content being fed into AI, and I can’t see why that should be a problem (assuming it was legally acquired or publicly available). Only the output can be problematic.

[-] GentlemanLoser@reddthat.com 5 points 1 year ago

No, the AI should be shut down and the owner should first be paying the statutory damages for each use of registered works of copyright (assuming all parties in the USA)

If they have a company left after that, then they can fix the AI.

[-] cerevant@lemmy.world 8 points 1 year ago

Again, my point is that the output is what can violate the law, not the input. And we already have laws that govern fair use, rebroadcast, etc.

[-] DandomRude@lemmy.world 4 points 1 year ago

I think it's not just the output. I can buy an image on any stock platform, print it on a T-shirt, wear it myself or gift it to somebody. But if I want to sell T-shirts using that image, I need a commercial licence - even if I alter the original image extensively or combine it with other assets to create something new. It's not exactly the same thing, but OpenAI and other companies certainly use copyrighted material to create and improve commercial products. So this doesn't seem like the same kind of usage as an average Joe buying a book.

[-] assassin_aragorn@lemmy.world 9 points 1 year ago

However, if the output of an AI would not be considered infringing for a human, then it isn’t infringement.

It's an algorithm that's been trained on numerous pieces of media by a company looking to make money off of it. I see no reason to give them a pass on fairly paying for that media.

You can see this if you reverse the comparison, and consider what a human would do to accomplish the task in a professional setting. That's all an algorithm is. An execution of programmed tasks.

If I gave a worker a pirated link to several books and scientific papers in the field, and asked them to synthesize an overview/summary of what they read and publish it, I'd get my ass sued. I have to buy the books and the scientific papers. STEM companies regularly pay for access to papers and codes and standards. Why shouldn't an AI have to do the same?

[-] bouncing@partizle.com 10 points 1 year ago

If I gave a worker a pirated link to several books and scientific papers in the field, and asked them to synthesize an overview/summary of what they read and publish it, I’d get my ass sued. I have to buy the books and the scientific papers.

Well, if OpenAI knowingly used pirated work, that's one thing. It seems pretty unlikely and certainly hasn't been proven anywhere.

Of course, they could have done so unknowingly. For example, if John C Pirate published the transcripts of every movie since 1980 on his website, and OpenAI merely crawled his website (in the same way Google does), it's hard to make the case that they're really at fault any more than Google would be.

[-] cactusupyourbutt@lemmy.world 2 points 1 year ago

well no, because the summary is its own copyrighted work

[-] bouncing@partizle.com 2 points 1 year ago* (last edited 1 year ago)

The published summary is open to fair use by web crawlers. That was settled in Perfect 10 v Amazon.

[-] assassin_aragorn@lemmy.world 1 points 1 year ago

Haven't people asked it to reproduce specific chapters or pages of specific books and it's gotten it right?

[-] Saik0Shinigami@lemmy.saik0.com 1 points 1 year ago

It’s an algorithm that’s been trained on numerous pieces of media by a company looking to make money of it.

If I read your book... and get an amazing idea... Turn it into a business and make billions off of it. You still have no right to anything. This is no different.

If I gave a worker a pirated link to several books and scientific papers in the field

There's been no proof or evidence provided that ANY content was ever pirated. Have any of the companies even provided the datasets they've used yet?

Why is this the presumption that they did it the illegal way?

[-] CmdrShepard@lemmy.one 3 points 1 year ago

If I read your book... and get an amazing idea... Turn it into a business and make billions off of it. You still have no right to anything. This is no different

I don't see how this is even remotely the same? These companies are using this material to create their commercial product. They're not consuming it personally and developing a random idea later, far removed from the book itself.

I can't just buy (or pirate) a stack of Blu-rays and then go start my own Netflix, which is akin to what is happening here.

[-] bouncing@partizle.com 8 points 1 year ago

There is already a business model for compensating authors: it is called buying the book. If the AI trainers are pirating books, then yeah - sue them.

That's part of the allegation, but it's unsubstantiated. It isn't entirely coherent.

[-] FlyingSquid@lemmy.world 3 points 1 year ago

It's not entirely unsubstantiated. Sarah Silverman was able to get ChatGPT to regurgitate passages of her book back to her.

[-] bouncing@partizle.com 3 points 1 year ago

Her lawsuit doesn't say that. It says,

when ChatGPT is prompted, ChatGPT generates summaries of Plaintiffs’ copyrighted works—something only possible if ChatGPT was trained on Plaintiffs’ copyrighted works

That's an absurd claim. ChatGPT has surely read hundreds, perhaps thousands of reviews of her book. It can summarize it just like I can summarize Othello, even though I've never seen the play.

[-] AnonStoleMyPants@sopuli.xyz 2 points 1 year ago

I don't know if this holds water though. You don't need to train the AI on the book itself to get that result - just on discussions about the book, which surely include passages of the book.

this post was submitted on 26 Jul 2023