> When the AI bubble pops, there will be collateral damage.

The take I heard yesterday is that the data centre builds are largely unusable for other computing tasks: super expensive to build, maintain, and run, and barely usable for anything besides LLM processing.
> Funny, since someone in the last thread was trying to argue that LLMs aren't overhyped nonsense.
>
> You know what's super cool? "AI" that figures out how to fold hundreds of thousands of proteins in the time it used to take to figure out, like, five.
>
> You know what is decidedly less cool? Lying chatbots that pretend to be human and services that make fake music and art based on the works of countless thousands or millions of actual, real, human, and thoroughly uncompensated artists.

Just taking down the LLMs wouldn't do anything about other forms of generative AI, e.g., for creating music and art and video.
with all the recent comments about Bezos and other techbro CEOs picking names for projects/products/companies that seem to tempt the gods of nominative determinism, or expose a near-sociopathic misunderstanding of the source material...

why?

[attached image]

(sorry if I missed the explanation)
> with all the recent comments about Bezos and other techbro CEOs picking names for projects/products/companies that seem to tempt the gods of nominative determinism, or expose a near-sociopathic misunderstanding of the source material...
>
> why?
>
> [attached image]
>
> (sorry if I missed the explanation)

The name is based on this emoji: 🤗
> It's not a bubble because it doesn't work (or because it does); the promise was not a working AI for the masses, the promise was infinite money for investors, which would fail even if everything went perfectly. The bubble won't pop because AI doesn't work, it will pop because it never made the amount of money it was (unrealistically) expected to.

"Both" is also valid here.
> Just taking down the LLMs wouldn't do anything about other forms of generative AI, e.g., for creating music and art and video.
>
> Is generative AI for images, music, and video also operating at a financial loss, so it's unsustainable? I don't know. Image generation can run locally, so there could be viable products even without being propped up by VCs. I don't know whether that's really feasible for music and video.

Pulling AI tasks from the cloud doesn't necessarily make it economically viable. It just means you pay for your own server and maintenance instead of paying Amazon to do it for you. It makes sense in some situations, but not in many others. Among the people in the production industry I've spoken to, the motivation for local image AI is usually not cost but risk of data theft.
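For context on the "can run locally" point: a bare-bones local image generation run really is only a few lines with the Hugging Face diffusers library. A minimal sketch, assuming a consumer GPU and a Stable Diffusion checkpoint; the model name and prompt here are illustrative examples, not recommendations:

```python
# Minimal local image generation sketch using the diffusers library.
# Requires: pip install diffusers transformers torch
import torch
from diffusers import StableDiffusionPipeline

# Downloads the weights once; after that, generation runs entirely on your own machine.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative checkpoint name
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a consumer GPU with enough VRAM

image = pipe("a watercolor painting of a data centre in a thunderstorm").images[0]
image.save("local_render.png")
```

Once the weights are cached, nothing leaves the machine, which is the data-theft angle mentioned above; the economics of running your own hardware are a separate question.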
> ChatGPT delivered semantic search. I would argue Google still hasn't delivered semantic search. Most of the rest of consumer AI is just a better UI, sometimes on top of real ML advances. The rest is just piffle, puffery, and some huge VC losses in 2026.

Which still raises the question of what use semantic search is when the material being searched is 99% slop generated en masse by the exact same tools. LLMs haven't delivered search, they've incapacitated it.
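To pin down what "semantic search" means mechanically: embed the query and the documents as vectors and rank by similarity of meaning rather than keyword overlap. A minimal sketch, assuming the sentence-transformers library; the model name and the toy corpus are illustrative only:

```python
# Minimal embedding-based semantic search sketch.
# Requires: pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Data centre capex is hard to repurpose after a bust.",
    "Protein structure prediction has sped up enormously.",
    "How to train a puppy not to chew furniture.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice
doc_vecs = model.encode(docs, normalize_embeddings=True)
query_vec = model.encode(["what happens to GPU clusters after the bubble"],
                         normalize_embeddings=True)

# With normalized vectors, cosine similarity is just a dot product;
# documents are ranked by meaning, not by shared keywords.
scores = doc_vecs @ query_vec[0]
for doc, score in sorted(zip(docs, scores), key=lambda x: -x[1]):
    print(f"{score:.3f}  {doc}")
```

Whether that ranking is worth anything when the corpus itself is slop is, as the reply above argues, a separate question from the mechanism.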
> The name is based on this emoji: 🤗, technically known as "Smiling Face with Open Hands". It's even shown at the top of the page if you go to https://huggingface.co/. However, I think your version does a superior job of capturing the essence of generative AI.

Forcing itself down our collective throats?
> My manufacturing company has been pushing us to use AI for the past year for no reason, and just today I heard someone mention it being a bubble. A week or two ago my mom, who can barely use a smartphone, talked about investing in AI. We're definitely in the "slowly, then suddenly" phase.
>
> I can only hope that the coming recession doesn't turn into a full-blown depression, and that it brings PC component prices crashing down so I can buy a gaming PC to hold me over in between going to food banks.

I recall some story from '29 about a New York broker who sold out of the market when his shoe-shine boy started giving him stock tips.
> Funny, since someone in the last thread was trying to argue that LLMs aren't overhyped nonsense.
>
> You know what's super cool? "AI" that figures out how to fold hundreds of thousands of proteins in the time it used to take to figure out, like, five.

Yes, that's super cool, but no sociopathic multi-billionaire or hedge fund will senselessly throw billions at it, because there are only business cases that promise to make them a few dozen dollars per dollar invested. Like curing cancer or so.
> You know what is decidedly less cool? Lying chatbots that pretend to be human and services that make fake music and art based on the works of countless thousands or millions of actual, real, human, and thoroughly uncompensated artists.

Also true, but the business case of "replace workers with chatbots, lying or not" was advertised to them as making thousands of dollars per dollar invested. So that's where most of the investment ended up.
> we can agree there's froth at the foundation-model p&l while also noting the application layer is already monetizing and the engineering curve keeps cutting cost. i'm not betting on a basilisk; i'm betting that "agents + tools + falling $/token" keeps compounding.

How is a falling $/token going to make it any more likely that there will be enough profit to earn back the money invested?
> He should replace "AI" (a marketing term since its inception in the '60s or '70s, can't remember) with ML, and then I'm OK with that statement.
>
> Thing is that ML is everything invented before LLMs, and LLMs got the "AI" moniker tacked on exclusively to hype them.

But LLMs aren't really learning, and are no more ML than they are AI.
> Reminder that people were saying tech stocks were a bubble in late 1998 and then the Nasdaq fully doubled in price during 1999-2000.

The Nasdaq was at 2,192 in Dec 1998 and hit 4,696 in Feb 2000. But it crashed after that, and did not reach 2,192 again until late 2005, or 4,696 until 2015. The question is: are we at Dec 1998, Feb 2000, or somewhere else?
> It is a tough bubble. The media is openly reporting that it is a bubble. But we all keep stampeding towards the wall.
>
> Too many are still in denial. Too much invested?

Bubbles don't pop until the fear of losing out in a crash overcomes the fear of missing out in a boom.
> Yes, that's super cool, but no sociopathic multi-billionaire or hedge fund will senselessly throw billions at it, because there are only business cases that promise to make them a few dozen dollars per dollar invested. Like curing cancer or so.

The biggest name in protein-folding neural networks is Google, AFAIK. I'm not defending the chatbots, but the real work IS still trucking along in the background.
> As one of the developers of the Roomba wrote 35 years ago: "Elephants Don't Play Chess".

A clever reference to the fact that in the hit experience, chess, the bishop was originally an elephant.
> So yeah, maybe I'm just one more normie who will someday be consumed by the basilisk, but I'll believe in the digital sampo when I see it and not before.

The basilisk is already dead. Companies are already willing to burn all their cash and bet their existence on reaching AGI; no looming threat is required.
> ...expose a near-sociopathic misunderstanding of the source material

And the torment nexus is another reference so overused that it might as well be retired.
> But who will be using the specialized models? Humans? Of course not. It will be the general models using specialized models using even more specialized models. Models all the way down...

They're being used all the time, quite literally everywhere, and have been for at least a decade now. Just think of how many IP cameras these days come with object-detection features. Yeah, those use this kind of AI, but not LLMs.
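That kind of on-device, non-LLM model is easy to sketch. A minimal example using a pretrained object detector from torchvision; real camera firmware runs much smaller embedded models, and the input file name here is hypothetical, purely for illustration:

```python
# Minimal object-detection sketch with a pretrained torchvision model,
# i.e. the non-LLM kind of "AI" an IP camera feature is built from.
# Requires: pip install torch torchvision pillow
import torch
from PIL import Image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)
from torchvision.transforms.functional import to_tensor

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

# "camera_frame.jpg" is a hypothetical still frame from a camera feed.
frame = to_tensor(Image.open("camera_frame.jpg").convert("RGB"))

with torch.no_grad():
    detections = model([frame])[0]  # dict with "boxes", "labels", "scores"

categories = weights.meta["categories"]
for label, score in zip(detections["labels"], detections["scores"]):
    if score > 0.5:
        print(f"{categories[int(label)]}: {float(score):.2f}")
```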
> I think that there is some question-begging here. Just because LLMs are improving in their performance does not mean that there is not an economic bubble around them, because even the advanced models haven't yet shown an obvious path to profitability.
>
> But leaving aside the bubble question (which is really the subject of the article): the belief that the LLMs will soar to superintelligence, and that these superintelligences will then gift us with money and other fine things, makes some pretty strong assumptions. I might feel better about the notion if the LLMs were trained on data that had itself been generated by superintelligences. It would surprise me very much if, by simply increasing compute and data, we were able to develop a system that could design, say, a robot with a working proprioceptive system or a neural prosthesis.
>
> So yeah, maybe I'm just one more normie who will someday be consumed by the basilisk, but I'll believe in the digital sampo when I see it and not before.

Of course, Roko's Basilisk is even more imaginary than the prospect of any LLM becoming an AGI.
> LLMs, in their current form, are in a bubble. Maybe there will be some breakthrough where they stop spouting utter garbage 69% of the time (yes, that's a made-up statistic, just like the ones LLMs like to hallucinate). LLMs will only start being useful when they're around 99.9%-100% correct. Without being able to trust the product, and with no such breakthrough coming soon, I can't see LLMs lasting, and the bubble will pop.
>
> ML and other parts of "AI" are much more useful and purpose-built, and therein lies the problem. It's hard to market something that's a niche product to the masses.

The problem with LLMs, at least in how they are currently built, is that they really are just stupidly expensive and complicated auto-complete machines. They're incredibly flexible in how they can twiddle with words, just like a super-duper version of the ELIZA program from the 1960s, but also like it, they don't actually understand anything in any real way.