Sure, but that isn't what was said. They said that teachers must have a license to use anything when they are "training human students". That is just plain wrong.

It is a bit more nuanced than that.
If the material being copied is strictly for in-class use and is pure research, then it is almost certainly fair use.
But if the material being copied is for public use by the class (e.g., a play or song), then it is not fair use.
And if the material being copied is from an existing textbook, then it is not fair use.
As was your response.

Sure, but that isn't what was said. They said that teachers must have a license to use anything when they are "training human students". That is just plain wrong.
Another distinct possibility is that they're paid shills who are prohibited from discussing the case publicly because they're agents of the company in reality, even if not openly so. I don't necessarily think so, but it'd also fit the facts so far.

Weird how the two or three shills for the AI industry who regularly post comments about how the latest LLM released today is already saving them so much time, and will definitely be the breakthrough that proves all the doubters wrong, never post on stories about the copyright aspect. Either (a) they don't have a good counterargument, or (b) they get AIs to write all their comments for them, and those AIs have been hardcoded not to respond to questions about copyright lawsuits.
Hell, Clippy would be better.

Yeah, but that's not really an argument in favor of AI. We would be better off with a circa-2002 super-basic chatbot running the USA at this point.
I don't think AI is going to work well enough to do anything to the labor market, but it does provide a way to steal lots of intellectual property, and unfortunately I think SCOTUS is eventually going to back this massive theft.

This is never going to happen. There is a trillion dollars invested in this stuff, and Trump and Congress are going to find a way to make it legal and allow AI to fuck over every content creator, writer, artist, and so on. We are solidly in the command-and-control market economy now, and nobody is going to allow 10,000 points to get wiped off the Dow. The billionaires are going to get their money.
The basic economic theory from the right is pretty much: wipe out all labor, go to a full asset economy, make money off crypto, meme stocks, and various scams, and turn Goldman Sachs into a rack of computers. We can always have prisoners pick our crops until we invent robots to do it; prison slavery is still legal in the US, after all.
Right, because courts are the exclusive arbiter of the linguistic meaning of theft. You can absolutely call someone a thief for stealing your idea to wear a blue dress to prom.

Edit: OK, the quote is blocked, which is fine. Regardless, calling this theft, whether by "AI" companies or individuals, is such bullshit. SCOTUS and multiple other federal courts have explicitly stated that copyright infringement is not equivalent to theft.
Where did you get the idea "that not a single user was sued that downloaded CR items without DRM"?

So many are comparing Napster and individuals that downloaded. Wrong comparison.
Napster SOLD/gave away the music. Most individuals who were sued were sued because they were offering up videos/music to others. I'm not certain, but I believe that not a single user was sued that downloaded CR items without DRM, but did not provide it to others. There are lots of fair-use issues with this last one, but again, I do not believe that people were sued for that.
AI downloaded it, but does not provide it to others. It is only used by the AI.
I believe that this is fair use.
If that is not the case, then China, Russia, and others will start jumping for joy.
Or, like early autonomous driving results, maybe this is just as good as it's ever going to be. It'll get stripped down, simplified, and used for things like managing telephone "help" labyrinths, replacing the robovoiced hard-wired mazes used now.
It is morally wrong to steal the creative work of millions of people to feed your industrial-creation machine in order to replace those people. The valuations of these companies are clearly based on the belief they will replace millions of workers and take a % of their salaries. Stealing their work without pay in order to replace them fucking sucks.
I would argue that making jokes about morals not existing is bad for society.

Hah, "morals"! Good one! What are you, 200 years old?
All these shareholders in AI companies need to ask themselves: why can't the AI generate its own content by now? A 'thinking machine' that has to very expensively webcrawl and summarize the world's content, over and over, and still can't actually think for itself?
What kind of 'generative AI' can't generate its own content? Generative AI is a smoothie blender, not a farm. You have to feed it as much as it feeds you.
It's a tech demo and a mechanical turk, not a thinking machine. The economics don't even work.
And I would reply that the disintegration of the concept of morality is what actually harms society, and observing this disintegration -- with humor or without -- is necessary if that decline is ever to be reversed.

I would argue that making jokes about morals not existing is bad for society.
Where did you get the idea "that not a single user was sued that downloaded CR items without DRM"?
Did you choose to not read or include the IMPORTANT part in this?
"I believe that not a single user was sued that downloaded CR items without DRM, but did not provide it to others."
Disney spent 18 months negotiating to create a digital version of Dwayne Johnson for the live-action Moana film. Johnson agreed. The technology was ready. Then Disney’s lawyers killed the whole thing—not because of privacy concerns or actor rights, but because they worried parts of the film might end up in the public domain.
This story is incredibly one-sided.
Did you even reach out to the plaintiffs at all?
Edit: no, seriously. The story cites Anthropic, then it cites a bunch of industry groups that back Anthropic. It doesn't cite the plaintiffs.
Nova Mob members, friends, and guests borged into Meta’s AI
Roll a die to choose the next word to build a sentence. Keep doing that 50 times to build a paragraph or page. What are the chances that you will accurately reproduce a section of a Harry Potter novel? About 98%, if you are one particular AI model.
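The word-by-word sampling described above can be sketched with a toy next-word model. This is a hypothetical illustration, not how any production LLM works: real models use learned probability distributions over huge vocabularies. But it shows the memorization problem in miniature -- when every word in the training text has only one observed continuation, "generating" just replays the training text verbatim.

```python
from collections import defaultdict

def train(text):
    """Record each word's observed continuations (a toy 'training set')."""
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, n_words):
    """Pick the first observed continuation at each step (greedy decoding)."""
    out = [start]
    while len(out) < n_words and table.get(out[-1]):
        out.append(table[out[-1]][0])
    return " ".join(out)

# Toy corpus: every word has exactly one continuation, so the model
# has effectively memorized the text and replays it word for word.
corpus = ("roll a die to choose the next word and keep doing that "
          "until you have built an entire paragraph")
model = train(corpus)
print(generate(model, "roll", 19))  # reproduces the corpus exactly
```

A real model's "die" is weighted by training, and for heavily duplicated texts those weights can become so lopsided that greedy decoding behaves like this toy: the copy comes back out.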
But before naming that Artificial Intelligence model, and which novels are uncannily reproduced with no money going back to the writer, how do books get into the AI training set in the first place? If you are Meta, you use a database of pirated books and hoover it all up in its entirety, according to The Atlantic. Just like the Borg on Star Trek.
Turns out almost all of the Nova Mob’s published members, friends, and guests are part of the borged data set that Meta ate for its training.
Did LibGen have permission to reproduce the books of these writers?
Did Meta have permission to borg them up into its maw, to train its AI with?
Search for yourself:
Search LibGen, the Pirated-Books Database That Meta Used to Train AI
https://www.theatlantic.com/technology/archive/2025/03/search-libgen-data-set/682094/
“Millions of books and scientific papers are captured in the LibGen collection’s current iteration.” Including novels, stories, and non-fiction by all these people, I’ve checked:
Eugen Bacon, Max Barry, John Birmingham, Jenny Blackford, Russell Blackford, Sue Bursztynski
James Cambias, Trudi Canavan, Paul Collins
Jack Dann, Chris Flynn
Rob Gerrand, Kerry Greenwood
Lee Harding, Richard Harland, Robert Hood
Van Ikin, George Ivanoff
Paul Kincaid
Vanessa Len, Ken Liu
Sophie Masson, Bren MacDibble, Iain McIntyre, Sean McMullen, Andrew MacRae, Farah Mendlesohn, Meg Mundell
Shelley Parker-Chan, Hoa Pham, Gillian Polack
Jane Routley, Lucy Sussex
Shaun Tan, Keith Taylor
Kaaron Warren, Janeen Webb
Okay, let me rephrase my question. Where did you read that the group of people you are claiming did not get sued, did not get sued?

Did you choose to not read or include the IMPORTANT part in this?
We can only hope!

Think about what this ruling would even mean for small-scale open-source projects. This would really be the death of all AI in the US.
Is it really financial gain if they've been bleeding money since day one with no end in sight? /s

It's basic copyright law.
If you use someone's copyrighted works without permission for financial gain (which clearly they are) or in a way that diminishes the value of the original work (which they almost certainly are), or if you create new works that are derivative of the original work (which they are doing almost by definition), you have violated copyright law.
Fair use doesn't apply here because of the size and scope of the use.
Anthropic is screwed and so they should be.
I wouldn’t. Children are people. AI is not a person. AI is a part of machines, machines which are built and run by corporations and adults who are culpable for their actions.

Personally, if AI were a good thing, I’d be happy to cut it the same slack we do children. That is, allow it an educational exemption. It’s not like children don’t copy. What is the saying? Good artists borrow, great artists steal?
But I would have to be convinced AI served the public good, and it is pretty hard to believe that if it is owned by billionaires.
Ultimately, the government may have to nationalize AI labor like it does the broadcast spectrum. It is hard to imagine how it will support UBI otherwise.
It’s not different at all. Both things involve computer data encoding source information. They just happen to involve encoding it in different ways.

Quite a bit different here: the end product is a neural net.
No, that does not follow in any way, because humans are not machines and exist in nature. Neural nets are artificial, digital constructs that exist entirely in machines as a way to simulate a facsimile of how a brain works.

It may have some weights tailored that remember a section of a book, but by the same logic so would a person's brain.
It’s not even remotely a stretch. The model is built from the data fed into it. It creates a mass statistical model of all the data fed into it to distill that information down into a smaller form, which can then later be decoded by prompting. Once created, the model itself is in effect simply a lossily compressed copy of the training data. Applying compression, lossy or otherwise, does not wash away the copyright. There’s plenty of more conventional lossy compression that uses statistical methods to encode the data. This is not a novel or controversial area; the difference is really just the scale and breadth.

I think it would hold water to make sure they legitimately bought a copy/license to read each book, but it is probably a bit of a stretch to say the neural net itself is infringing.
Even if we accept that premise, which is more debatable than your framing would suggest, then it is still incumbent upon them to prove it. If they can’t, then the fact they fed the data into the model at all is all anyone has to go on.

There may be entire books that don't adjust the neural net at all, and now that they are layering synthetic data on top of it, that might even undo the original adjustment.
As the Robber Barons demonstrated, there are lots of ways of having a financial gain even when the company is bleeding money.

Is it really financial gain if they've been bleeding money since day one with no end in sight? /s