AI image synthesis is getting more capable at executing ideas, and it's not slowing down.
In a technical report on SDXL listed on arXiv earlier this month, Stability complains that "black box" models (such as OpenAI's DALL-E and Midjourney) that don't let users download the weights "make it challenging to assess the biases and limitations of these models in an impartial and objective way." They further claim that the closed nature of those models "hampers reproducibility, stifles innovation, and prevents the community from building upon these models to further the progress of science and art."
This one is so unsettling it feels straight out of SCP

"Frosted prick cereal," huh. We should all now be thankful that DALL-E is not British, I suppose.
So DALL-E decided that the queen of the universe is a white human. Big surprise.

Well, duh... Jesus is white, after all.
This one is so unsettling it feels straight out of SCP

I also find that picture unsettling (and hilarious, in its own way, that the cereal is doing a DreamWorks face), and I think it's because of the hand. The skin seems unnaturally smooth and moisturized.
If progress in AI continues along the path we're seeing, every creative person at home with an AI-running machine could potentially become like the CEO of a major creative company today, if they know how to wield it.
The quality is great, although a lot of these images still have a bizarre, unreal quality to them.

That sounds great, in theory, but how do you police such a claim, especially as the technology improves and it really will be totally indistinguishable from a human-made artwork?
I'm less concerned about the loss of jobs for artists than I was before, thanks to movement on the legislative front, but also movement by people themselves.
In the boardgame space we've seen that projects announcing themselves to be AI-art free (and hence supportive of real human artists) get a very positive reception.
If people need to ask themselves why they would value human-made art over AI art, the answer is actually pretty simple: a lot of people value the effort other people put into producing things, and they don't react as positively to what they see as someone "attempting to make a quick buck," so to speak.
I find it curious that several of the images with writing in them have defects (as cited by others above). Some of the defects appear as spelling errors (e.g. "marsmallow" missing the "h"), while in others the writing is illegible (look at the "Gerbil Essences" bottle). Have we identified a DALL-E 3 "tell"?
DALL-E's text rendering ability isn't perfect—some words have extra or missing characters, and others seem garbled at times. The team speculates that this is due to the token encoder they used. Tokens are fragments of words (and sometimes whole words) that represent text in machine learning models such as GPT-4 and the prompt interpreter for DALL-E 3. The reliance on tokens sometimes creates a type of blindness to certain words or spellings when chunks of words get lumped together into a single token.
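To see why token-level encoding can hide spelling, here is a toy greedy subword tokenizer. This is a deliberately simplified sketch with an invented vocabulary, not OpenAI's actual encoder, but it shows the key effect: the model sees "marshmallow" as two chunks rather than eleven letters, and the misspelling "marsmallow" produces a completely different token sequence.

```python
# Toy greedy longest-match subword tokenizer. The vocabulary below is
# invented for illustration and is NOT the real DALL-E / GPT-4 vocabulary.
TOY_VOCAB = {"marsh", "mallow", "mar", "sh", "all", "ow",
             "m", "a", "r", "s", "h", "l", "o", "w"}

def tokenize(word, vocab):
    """Split a word into tokens by repeatedly taking the longest
    vocabulary entry that matches at the current position."""
    tokens = []
    i = 0
    while i < len(word):
        for length in range(len(word) - i, 0, -1):
            piece = word[i:i + length]
            if piece in vocab:
                tokens.append(piece)
                i += length
                break
        else:
            tokens.append(word[i])  # unknown-character fallback
            i += 1
    return tokens

print(tokenize("marshmallow", TOY_VOCAB))  # ['marsh', 'mallow']
print(tokenize("marsmallow", TOY_VOCAB))   # ['mar', 's', 'mallow']
```

Because the model operates on those chunks rather than on individual letters, a one-letter spelling difference is not a small perturbation to it; the correct and incorrect spellings are simply different token sequences, which is consistent with the "marsmallow"-style defects noted above.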
For thousands of years, we've told ourselves that we as humans are unique and special among animals because we are creative—we are toolmakers. We have language and grammar. We can reason. We've seen in the past year that our place as the center of the intelligent universe is no longer assured, seemingly being chipped away month by month due to new machine learning research. It's been a Copernican moment, akin to the demotion of the Earth from the center of the universe.

You sound like if a supervillain were a redditor. Why does every AI booster have to be so lame?
Gwendolyn Wood said:
I hope that handmade art will remain something that matters to people.

If you've ever made proper buttermilk pancakes at home from scratch, you will know that the "buttermilk pancakes" served at most restaurants are nothing at all like them in taste or texture. Yet patrons happily order them along with their breakfast eggs.
So DALL-E decided that the queen of the universe is a white human. Big surprise.

Yep. It encodes a compressed representation of the training distribution. That means it doesn't just learn biases present in the training set; it actually amplifies them. This is made worse by the fact that future rounds of model training will undoubtedly include AI-generated art, creating a feedback loop.