> That is not to say that there are no ethical or economic concerns with regard to AI art. But the argument that 'people would stop making art if a computer can do it for you' is very much a strawman in this debate, and holds very little water.

Is anyone actually arguing that people would altogether stop making art? I have seen my fair share of hobbyist artists get demoralized when, say, someone trains a Stable Diffusion model deliberately and specifically to rip off their style or characters. But of course the people who make art for the love of art will keep doing it anyway.
> One consequence, looking at it from a very high level, is that this technology seems to empower a huge number of people who used to be either not quite talented enough or not passionate enough to instantly surpass a bunch of other people who are both, and who spent decades investing in it. Millions of people whose entire identity was put into this must feel like the rug has been pulled out from under them, and for what? In return we get tens of millions of moderately talented hobbyists (at least as long as ChatGPT hasn't taken over prompting completely) flooding the world with mediocrity. What is the aggregate effect on total human wellbeing here?

Ah yes, much the same way that music is now pointless because someone invented a synthesiser. Damn flood of mediocrity from those people who can't even play a piano, or a guitar, or drums, or...
> Nor is there a carve-out in copyright law to prevent using art as training material. I'm constantly amazed by how broad people mistakenly believe copyright protections to be.

Stinks that you are being downvoted. (This probably will be as well.)
> Oh. My. God. Ray Gun! I bought the first few issues of that mag, and yeah, the layout and design was really... interesting. It also had a lot of good and interesting articles covering alternative/indie bands, with no thought to commercialism.

The design was the brainchild of David Carson. He has a good book called The End of Print; you can find scans of it online pretty easily if you're interested. I have a hard copy.
> Is anyone actually arguing that people would just altogether stop making art? I have seen my fair share of hobbyist artists get demoralized when, say, someone trains a Stable Diffusion model to deliberately and specifically rip off their style or characters. But of course the people who make art for the love of art will keep doing it anyway.
The problem is not, and has never been, that human artists will stop existing altogether. The problem is that these tools take the work of those artists, often without consent, and use it to build commercial products that will deprive those same artists of further work, for the sake of profit.
> The same problem has come up with the invention of the cotton gin, or the automobile, or the printing press, or the computer, or 3D milling, or robotic assembly.
>
> Cotton pickers, horse-carriage drivers, scribes, human calculators, machinists, and factory workers didn't get to ban these inventions, and I don't think commercial artists will have much traction with outlawing AI art.
>
> Note that all these inventions heralded economic, social, and technological progress as well as ills, but society adapted around these changes, some of which were drastic.
>
> Now it would be great if society could come up with an equitable way to reimburse its participants, perhaps with some form of Universal Basic Income, but I don't think artists as a class will be the ones to put a brake on society's relentless drive for automation and automated tools (and profits. It's always about the profits).

The key difference between AI art and the automobile is that the latter isn't the result of taking the work of thousands of horse-drivers, putting it into a blender without permission, and using it to create a machine that does the horse-driver's work in their place. People are deliberately training AI models on specific artists to recreate their particular styles. Models are scraping thousands of images without permission or credit. It's not innovation; it's plagiarism and theft. It's practically a meme at this point to see an ersatz Patreon watermark in the corner of AI art, because many models are trained on so many stolen pieces.
> Nor is there a carve-out in copyright law to prevent using art as training material. I'm constantly amazed by how broad people mistakenly believe copyright protections to be.

There are three major parties to this viewpoint: those who are financially incentivized not to understand it, those who are uninformed about copyright law (which is absolutely fine, generally speaking), and those who are virtue signaling.
> Stinks that you are being downvoted. (This probably will be as well.)

From a technical perspective, it's impossible for an AI to create a derivative work. The process works by taking a training set and breaking it down into a data set, which is then used to create a set of instructions the AI works from in creating new works. When you see things like ghost watermarks, that's a training issue (the data scientist did not provide good enough instructions, i.e., to ignore watermarks). That's correctable, of course, without necessarily needing to start over.

There are some interesting instances where an AI will compose something that looks pretty similar to another artist's work. But that's all it is. The way it works is by generating an entirely new media composition from scratch. If it creates a "portrait of a golden retriever" that looks materially similar to an artist's, chances are that a) there are a lot of similar portraits, b) there would be enough differences for a jury to find them to be non-derivative works (if that were possible, which it isn't, since there's no copyright to argue over), and/or c) the training set was somehow incredibly limited on dog portraiture.
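The trained-parameters-versus-stored-copies distinction the commenter describes can be made concrete with a toy sketch. This is emphatically not how Stable Diffusion actually works (real systems are deep diffusion networks); it only illustrates the narrow point that a fitted model is a small set of parameters derived from the training set rather than a collection of stored images, and that sampling from it produces new output. Assumes only NumPy; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training set": 1,000 tiny 8x8 grayscale images, flattened to 64 pixels.
training_images = rng.normal(loc=0.5, scale=0.1, size=(1000, 64))

# "Training" here is just fitting a per-pixel mean and standard deviation.
# The point of the analogy: the fitted model is 128 numbers, not the 64,000
# pixel values of the training set -- no image is stored inside it.
model = {
    "mean": training_images.mean(axis=0),
    "std": training_images.std(axis=0),
}

def generate(model, rng):
    """Sample a brand-new 'image' from the fitted distribution."""
    return rng.normal(model["mean"], model["std"])

new_image = generate(model, rng)

# The sample is new: it does not exactly match any training image.
distances = np.abs(training_images - new_image).sum(axis=1)
print(distances.min() > 0)  # True: no exact copy exists in the training set
```

Whether this analogy settles the legal question is, of course, exactly what the thread is arguing about; the sketch only shows the mechanism being claimed.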
https://www.copyright.gov/what-is-copyright/
U.S. copyright law provides copyright owners with the following exclusive rights:

- Reproduce the work in copies or phonorecords.
- Prepare derivative works based upon the work.
- Distribute copies or phonorecords of the work to the public by sale or other transfer of ownership, or by rental, lease, or lending.
- Perform the work publicly if it is a literary, musical, dramatic, or choreographic work; a pantomime; or a motion picture or other audiovisual work.
- Display the work publicly if it is a literary, musical, dramatic, or choreographic work; a pantomime; or a pictorial, graphic, or sculptural work. This right also applies to the individual images of a motion picture or other audiovisual work.
- Perform the work publicly by means of a digital audio transmission if the work is a sound recording.

Copyright doesn't mean "I own it and you have to ask me if you want to do anything at all with it." The only one of these rights that could possibly apply is the right to prepare derivative works. And that would potentially apply only to something the AI creates, not to the act of training.
It's absolutely fine if someone thinks that copyright should be stronger or protect more things than it does. It's fine to say that this OUGHT to be illegal. But it's not wrong to point out that there is very little factual support for the claim that it actually IS illegal.
Agreed. The vast majority of art is made for art's sake. This is the case on Twitter, Instagram, Deviantart, Pixiv, or any of the large image sharing websites, where the vast majority of contributors create art that is not monetized.
The same is true for photography, fiction writing, 3D modeling, music (just compare how many people actively make money through music with how many kids go to piano lessons), knitting, sculpting, woodworking, crafts, etc. Creative outlets will always have people engaging in them for rewards (personal, social) other than the purely economic.

Humans are a creative species. Monetization of our creative skills is a fairly recent phenomenon, and AI tools are just a new addendum, running parallel to our creative endeavours as a species and as individuals.
We did not stop writing fiction when AI writing tools came out, we did not stop composing music when algorithmically created music came about, and we will not stop making art because AI art generation tools exist.
That is not to say that there are no ethical or economic concerns with regards to AI art. But the argument that 'people would stop making art if a computer can do it for you' is very much a strawman argument in this debate, and holds very little water.
> People have been training off other people's art since the invention of art. There isn't anything unethical about it.

An actual artist trains by learning how other artists do techniques, etc., but creates their own unique art (unless they are creating forgeries). AI actually uses the art created by others, glomming it together with algorithms to create what you see. Everything you see in a Midjourney "created" picture includes pieces of actual art created by human artists, not just their techniques. AI art is more comparable to the "sampling" used by musicians, and musicians have to get approval, possibly including payment, from the original artists they sample if the work is still under copyright. Doing it with a computer algorithm shouldn't magically avoid paying the original copyrighted artists for the art it uses.
> Ah yes, much the same way that music is now pointless because someone invented a synthesiser. Damn flood of mediocrity from those people who can't even play a piano, or a guitar, or drums, or...

Synthesizers are instruments. They take active human input at all times.
Maybe, just maybe, art will be different. And that's fine - no, actually, that's exciting - to me.
Two options.
1. This splits the anti-AI art movement between Luddites (not as a slur, just the best word for it) and copyright devotees. Copyright devotees will be happy with a solution like this because it fits the magic rules as well as any stock image database does. If a large enough bloc of copyright enthusiasts split off from the anti-AI movement over this, I strongly suspect that there's no longer any political space for the Luddites to get anything done at all. The response will basically be "They made it legal and now everyone's happy with it, what's the holdup?". The resulting implosion of the anti-AI movement is likely to mean that even "non-ethical" AI art tools like Stable Diffusion are eventually widely used if what they're doing is found to be legal in the US. There won't be any organised campaign left to oppose them, and it'll end up just being: If you want to have "ethical AI art", you use the models of very large corporations like Adobe, Microsoft, Disney etc, and if you don't view current AI art as unethical you can use open source or research derived AI art programs.
2. Most anti-AI people aren't actually copyright devotees, but rather Luddites that are just grabbing the most useful argument at this current moment. If this is the case, there won't be a substantial bloc splitting off from the anti-AI movement regardless of how legal and ethical a training database is made, and it will either succeed as an activist movement in forcing the government to make concessions (or splitting industry with boycotts), or it will fail after enough time has passed.
I don't know which way this goes. It's also leaving aside the questions of "Is Firefly actually any good at all, considering the inherently limited training data?" and "Will the United States government let any 'ethics' movement prevent the US remaining the best place for AI researchers to do their research, given the ongoing effort by China to recruit those researchers with better pay and benefits as the two powers enter a new Cold War?"
I am also really interested in what ends up happening with the EU and UK if the anti-AI movement succeeds in getting legal bans etc in the US. The EU has been unequivocal about supporting AI researchers, and the UK has gone well beyond that and given copyright exemptions for basically anyone doing AI research in the country, with no requirement of separation between researchers and businesses. Even if the anti-AI movement succeeds in the US, I don't see how that translates to success in the EU or the UK (or Japan, or China, etc).
If I study all of Picasso's work and can now make a new painting clearly in his style (but not of any subject he's painted before), am I legally infringing his rights if he is alive today?

I would've thought such questions were long settled and that there is a clear-cut answer already. (But I don't know the answer -- yes or no?)

"New invention destroys some people's livelihood" -- that happens all the time in all sorts of fields. People and society have to adapt (provide training for new job roles, etc.), but IMHO that is not a reason why such a new invention should be disallowed.
> Will artists stop complaining now?

I've seen this sort of negativity toward artists a lot lately. Where the heck did it come from? Are we really going to cast the friggen artists as the bad guys here? We don't even know if there are any bad guys.
My guess is no, because they never cared about copyright - they just used it as a blunt object.
There aren't bad guys, at this point. There are free-loaders, who are teaching machines to ape other people's work on a massive scale, without the rights holders' permission or knowledge.

For those of you concerned about AI art dominating the industry, this might give some comfort, if you haven't seen it. Bear in mind that a lot of questions remain and this is far from definitive, but I find it very interesting. If you can't copyright an image from an AI, that would dramatically reduce its appeal in today's IP-driven creative industry. The argument is interesting too, although I'm skeptical that it will hold up under closer scrutiny.
I've said this before, but I'll reiterate: I fail to see any way that these "AIs" are actually doing the same thing as human minds when "learning." To be sure, I am far from an expert on the brain. But I don't think this is an assumption we can make. Just because we call it "learning" does not at all mean it's analogous to human learning. Based on what knowledge I have, the two appear very different.
Of course, this doesn't really sway the verdict either direction, I'm just tired of hearing this argument.
> An actual artist trains by learning how other artists do techniques, etc., but creates their own unique art

Google "fan art". Many artists create art based on other people's art.
> ...glomming it together with algorithms to create what you see. Everything you see in a Midjourney "created" picture includes pieces of actual art created by human artists

That's not how they work. They train off the image; they aren't taking the actual art images and mashing them together. What they create is completely original.
> AI art is more comparable to the "sampling" used by musicians, and musicians have to get approval, possibly including payment, from the original artists they are sampling if it's still under copyright.

No, because they aren't using the art. It's like listening to a song and deciding you want to make music that sounds like that.
> Google "fan art". Many artists create art based on other people's art.

They train off the art but they aren't using the art? They tell the machine "do it like this," but that isn't using the unlicensed art?
> They train off the art but they aren't using the art? They tell the machine "do it like this," but that isn't using the unlicensed art?
>
> Reconcile that, please.

I can train off the art the same as the AI does. I can try to paint something in the style of Boris Vallejo, or Picasso, or a random person on DeviantArt. So I do my painting and create an original work based on their style. But I don't owe them anything; I don't have to ask permission. I don't need a license to learn from their style.
> They train off the art but they aren't using the art? They tell the machine "do it like this," but that isn't using the unlicensed art?
>
> Reconcile that, please.

Correct. Much like how I can commission an artist to draw an image of the Pokemon Poochyena. They would use official art as a reference, but not actually use the image in making the art.
> I can train off the art the same as the AI does. I can try to paint something in the style of Boris Vallejo, or Picasso, or a random person on DeviantArt. So I do my painting and create an original work based on their style. But I don't owe them anything; I don't have to ask permission. I don't need a license to learn from their style.
>
> Why does the AI need a license to learn but I don't?

Because you aren't a fucking machine that can replicate this style ad infinitum. Plus, you probably do it poorly. This isn't hard, dude.
> The basic problem, from my perspective, is economic. I'm not convinced that, in the long run, professional artists and their employers will actually be willing to pay for ethics as a line item. As one part of a broader creative suite, for which you're already paying anyway? Maybe. But nobody is going to spend money on an ethical model when the "unethical" model is just as good or better (because it has access to more training data) and also cheaper, unless they face actual legal risks. If those legal risks materialize, then maybe this will be a different conversation, but I don't think anyone has plausibly alleged, before a court of competent jurisdiction, substantial similarity between an AI model's training data and its outputs. If that never happens, then eventually people will stop worrying about it happening.

I see your point, but the name "Adobe" is golden for many content creators and, especially, art directors (for better or worse). If they're already in the Adobe ecosystem, I think they will definitely use this. And if this program is properly integrated with the other Adobe products, it becomes a no-brainer for those people. Just my opinion.
> So your main issue has to do with volume of production. I see. I could argue against the printing press with that kind of reasoning. A clerk or scribe can only copy a work so quickly, whereas a press can create thousands of copies in the same amount of time.

My main issue is with the role of the artist in creating the work. Mass reproduction thereafter cheapens it further.
I personally think that AI art needs to be correctly sourced so the viewer knows what they're looking at. Perhaps an AI mark similar to a copyright mark.
> Mass reproduction thereafter cheapens it further.

Your argument is that mass reproduction is bad? Are you against printers and the copy/paste function of computers?
> There's no such thing as ethical AI image generation if it's used to replace art made by a human. It's bad for artists financially and for our culture.

As a former graphic designer at an advertising firm, I can tell you that speed and utility will win in almost any commercial advertising company. I do think it will mean fewer positions in commercial art, unhappily.
> My main issue is with the role of the artist in creating the work. Mass reproduction thereafter cheapens it further.
Keep digging, son.
> Your argument is that mass reproduction is bad? Are you against printers and the copy/paste function of computers?

Don't be dumb.
> If it creates "portrait of a golden retriever" that looks materially similar to an artist's, chances are that a) there are a lot of similar portraits, b) there would be enough differences for a jury to find them to be non-derivative works (if that were possible, which it isn't, since there's no copyright to argue over), and/or c) the training set was somehow incredibly limited on dog portraiture.

Nah. That's not true at all. For a lot of celebrities, the model is training off a few very obvious movie promotional stills, and the output will look nearly identical. Plus, there are tools where you can search the database used to train Stable Diffusion, and sure enough, neatly tagged, are the exact images you'd expect.
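Tools for searching a training database of the kind mentioned above typically work by nearest-neighbor search over image embeddings. Below is a minimal, hypothetical sketch of just the search mechanics, with random unit vectors standing in for real learned embeddings (actual tools use something like CLIP embeddings over the LAION dataset); every name here is illustrative, not any tool's real API.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for embeddings of images in a training database.
# Rows are unit-normalized so a dot product equals cosine similarity.
database = rng.normal(size=(10_000, 512))
database /= np.linalg.norm(database, axis=1, keepdims=True)

def nearest(query: np.ndarray, db: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k database entries most similar to the query."""
    q = query / np.linalg.norm(query)
    scores = db @ q                      # cosine similarity against every row
    return np.argsort(scores)[::-1][:k]  # highest-similarity indices first

# A query that is a slightly perturbed copy of database entry 1234
# should surface entry 1234 as its top match.
query = database[1234] + rng.normal(scale=0.01, size=512)
top = nearest(query, database)
print(top[0])  # 1234
```

Real deployments replace the brute-force dot product with an approximate nearest-neighbor index, but the principle, embed the query and rank the database by similarity, is the same.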
> By extension of that thinking, an artist cannot sell copies of their own work, then.

Nope. Not my argument.
As per every dictionary ever written:
There is no such thing as AI "art", and there never will be.
Call it something else: graphics, imagery, creations. But AI doesn't ever make "art". If you want to reference, praise, or blame someone, go to the person using an AI tool, and/or the people who wrote its code. THEY can create actual art with the tool called AI.
[Need some convincing? When has a paintbrush ever created art? Never. The artist uses a tool, such as a paintbrush, to create art. It's that simple. Tools create nothing and never will.]
> As per every dictionary ever written:
>
> There is no such thing as AI "art", and there never will be.
>
> Call it something else: graphics, imagery, creations. But AI doesn't ever make "art". If you want to reference, praise, or blame someone, go to the person using an AI tool, and/or the people who wrote its code. THEY can create actual art with the tool called AI.
>
> [Need some convincing? When has a paintbrush ever created art? Never. The artist uses a tool, such as a paintbrush, to create art. It's that simple. Tools create nothing and never will.]

There was no such thing; now there is. The dictionaries will be updated accordingly.
> Eh, if the art someone can produce can simply be replaced by a robot, then I'm not sure they were much of an artist to begin with. The good artists will still get paid, and if they're smart they will use AI for inspiration or to do the more tedious or labor-intensive parts of their job. AI might put the mediocre ones out of a job, though.

This won't be decided by the audience or by artists, but by business executives who will see $40 per month vs. $3,000. In a few years a lot of people will see comparisons like that in programming, legal assistance... every office job. You will need two senior coders to do the work of ten people... most probably you will have two junior programmers doing the job of ten senior ones, with the seniors consulting from time to time. It's cheaper that way.