Ars Live: Is the AI bubble about to pop? A live chat with Ed Zitron.

dzid

Ars Centurion
3,373
Subscriptor
I agree with your points and this bit (my bolding for emphasis) probably explains a lot of broader social issues around the world.

If we're chasing growth at all costs but said growth is in tech bubbles and property rather than productive labour activity, then most of the population is not going to feel the benefit of it and is going to get a bit angry.
The only thing I would add to that excellent summary is that it shines a bright light on the single most important element, the one we can and must focus on: where the anger should be directed, namely at the oligarchs who continue to put so much effort into pitting Americans against each other (and this most definitely applies to other countries as well).

ETA: An earlier comment referenced what I think may well be a valid use case for this technology at the current burn rate, from an oligarch's point of view (particularly Yarvinites). So they may find it worthwhile to stave off the implosion scenario that would constitute the grand finale for a normal business.

The strategy for us remains the same. I expect it to be a very tall order, but not impossible.
 
Last edited:
Upvote
14 (14 / 0)

Stuart Frasier

Ars Tribunus Angusticlavius
6,480
Subscriptor
I would like to also throw out the digressive and totally off topic hot take that Dr. X was the actual deuteragonist of The Diamond Age - not Hackworth - and is arguably the most heroic of its characters, don’t @ me
I would have agreed with that take until Dr. X’s last interaction with Hackworth, where he expresses regret for saving the girls.
 
Upvote
3 (4 / -1)

nimelennar

Ars Tribunus Angusticlavius
10,028
Hey, Dr. X did the best he could.
To some extent. It's a little irresponsible to not vet the coding of the Primers (which meant that instead of brainwashing the orphan girls to serve the Celestial Kingdom, Hackworth was able to brainwash them into a Mouse Army to serve Nell).

I can see why X thought (until he actually tried it and saw its limitations) that the Primer was a good solution to his problems. But I think that, in order to properly judge how well he did in providing a solution to the problem of how to raise those girls, you'd have to know what his available alternatives were; I don't think the text does a good job of exploring that question. It seems like the options considered were either the Primer or just abandoning the girls, and I am not convinced those were the only available options (What would have happened if they couldn't recruit Hackworth? Would abandoning the girls really have been a foregone conclusion, in that situation? There were truly no other options?).
I would have agreed with that take until Dr. X’s last interaction with Hackworth, where he expresses regret for saving the girls.
I don't think that's a correct way to characterize that interaction.

What I would say is that Dr. X would probably disagree with the proposition that he did the best he could:

Dr. X considered it. "It would be more correct to say that, although it was virtuous to save them, it was mistaken to believe that they could be raised properly. We lacked the resources to raise them individually, and so we raised them with books. But the only proper way to raise a child is within a family. The Master could have told us as much, had we listened to his words."
 
Upvote
2 (3 / -1)

Tango*Urilla

Smack-Fu Master, in training
82
Subscriptor++
My issue is that it's not a great sounding board. Using Wikipedia's definition:


ChatGPT provides loads of comments. Comments that are often plain wrong and I have to figure that out the hard way if I'm trying to use it to write something new. (If I'm writing something old, I just use my own brain with a few web searches to refresh myself on syntax or specific library function names.) And then if you tell it it's wrong, or just ask it if it's sure, it's a yes man that will tell you oh great sir, I apologize and you are correct, and here's what I should have said instead. It will tell you that if you are wrong and it was right to begin with.

At least, that's my experience with it as someone who has been coding for a living for 30 years. I don't hate the idea of it (beyond the ethical problems behind training on data without permission), but I do just find it to be an idiot. And you know, sometimes other human programmers are idiots as well. But I don't need to go seek that kind of thing out to contribute to how I write my code.
I think many of you professional coders are forgetting that there are quite a few people out there (myself included) who never learned to code professionally. Yet we still occasionally need working code. It doesn't have to be elegant or safe, because it's just meant to run once or handle a one-off task. This applies to both personal and professional contexts and can genuinely add value to our lives and work.

In my case, I frequently need some basic data science tasks done in either R or Python. It's never production-grade, but it's still complex enough that I can't manage it on my own (or would need to invest countless hours I don't have, since I'm a highly specialized professional in an entirely different field). However, I don't have access to an affordable data scientist who would spare me some time, especially not for the mundane tasks I need completed. This is exactly the gap where Claude and ChatGPT excel for me. My productivity has genuinely skyrocketed since they've become my personal bachelor's-level data scientists - they can handle things I couldn't do on my own (though I'm still able to sanity-check their output), and they're available on the train, on weekends, whenever I need something done. And it's fun to use, to boot!

That's why I'm using the hell out of them while they're available at current prices.
 
Upvote
2 (10 / -8)
When will the bubble pop is a valid question, as all bubbles do. But a better question to me is what does AI actually do?

Apparently, it can create code pretty well. We can already do that, but the machine can do it cheaper. Fine. I suppose AI could design and fabricate and build, say, cars, but we have had machines doing these things for 50 years. Or AI could be a chef, and we could eliminate the kitchen staff and save money. Again, we have had machines for 50 years that are able to do that, but they don't bring any creativity to it. They follow the recipe, the programming. It would be like Applebee's. Which isn't necessarily bad, but are they going to spend twenty billion dollars to make the food that the Applebee's staff is already making? Probably not.

I guess AI could do the research and writing to get me through high school and college. Maybe. But I had Earth Science in 7th grade. If I had ChatGPT write my term paper, it would probably insist that the dinosaurs created airplanes in the Carboniferous Era. So I might have to repeat 7th grade. Since AI likes to just make shit up, it's not any good for anything that is fact based.

So what CAN AI do? If you need a fake friend, AI can be that for you. It may eventually tell you to take your own life, so YMMV. Outside of that, I'm really not sure. AI does few tasks well, but the tasks it can do we already have much cheaper machines, or people, that do the job as well. So you have to ask yourself how much money do you want to spend to get what you have now, only seem cooler? Does cool have a monetary value? No, no it does not.

AI looks a LOT like the Magic Blood Machine that Elizabeth Holmes is in prison for right now. In the same way that Holmes promised her machine would change the world, the AI people say AI will change the world. But AI would fail Earth Science, claiming gasoline comes from dinosaurs. So the reality is, RIGHT NOW, AI is worthless. Could AI unify physics, and do away with bullshit such as dark matter and string theory? Perhaps. But right now it can't be trusted to tell you what the weather will be tomorrow. And yet, our economy is largely based on AI and Taylor Swift.

To paraphrase Spock, all bubbles burst. As will the AI bubble. And since so much of this money is coming from high-profile people, if AI turns out to be a grift like the Magic Blood Machine, there will be hell to pay. Because in the United States, it's perfectly fine for monied people to steal from poor people. But when less monied people steal from the wealthy, the less monied people go to fucking prison. And AI sure looks like a fucking grift to me.
It is providing extreme value in research and development. If used for its intended purpose, AI is an extremely powerful tool. There is this strange and exclusive focus here on LLM chatbots that indeed have questionable use cases. But if you know what you are doing, and don't try to use an LLM as a calculator, what is happening is very exciting. Scientific progress is already accelerating because of AI models. There are countless instances in the life sciences, mathematics, physics, materials science, medicine, etc. where AI has helped progress. And it is just 3 years since Chatgpt launched.
 
Upvote
-16 (4 / -20)

Architect_of_Insanity

Ars Tribunus Militum
2,149
Subscriptor++
When it does pop it's taking the rest of the economy down with it.
Don't worry... our underperforming 401(k)s will provide the safety net for all the incredibly wealthy to buy stocks with their liquid assets at basement pricing and reap even more wealth on the rebound.

Maybe I can retire after I give my expected two weeks' notice when I die. Just prop me up at my desk and duct tape me to the chair - I can answer the help desk line.
 
Upvote
17 (17 / 0)

nimelennar

Ars Tribunus Angusticlavius
10,028
There is this strange and exclusive focus here on LLM chatbots that indeed have questionable use cases.
The thing is, that (and image generating tools like Stable Diffusion) are basically all that's being sold to consumers; they're what's being shoved in our faces by operating systems and office suites and search engines and social media and customer support pages and and and...

The argument you're making is similar to if people were complaining that, say, the newest refresh of the Ford F-150 was a pile of crap, and your response was that Ford's NASCAR research was producing some great new developments. Sure, that may be true, but that's hardly a comfort to people being bombarded with ads for a crappy pickup truck they don't want.

Heck, even you are falling victim to the "strange and exclusive focus here on LLM chatbots":
And it is just 3 years since Chatgpt launched.
You're using as your baseline for where to measure progress for AI... not machine learning software in general, not even the GPT LLM itself, but the chatbot version of it.
 
Upvote
28 (28 / 0)

hillspuck

Ars Scholae Palatinae
2,179
I think many of you professional coders are forgetting that there are quite a few people out there (myself included) who never learned to code professionally. Yet we still occasionally need working code.
I'm not really forgetting it. I'm just talking about using it for the non-basic things. I find it generally works very well for grunt-work things like writing PowerShell scripts or some simple Python code. I'm talking about using it to write code that would take me more than a small amount of knowledge-checking and a larger amount of just time cranking it out, code that reaches the complexity that trips up existing AI.

It doesn't have to be elegant or safe, because it's just meant to run once or handle a one-off task.
I'm not talking about elegant or (mostly) safe. I'm talking about working at all. It produces code that won't even compile.

In my case, I frequently need some basic data science tasks done in either R or Python. It's never production-grade, but it's still complex enough that I can't manage it on my own (or would need to invest countless hours I don't have, since I'm a highly specialized professional in an entirely different field).
And this is why we're fucked. You're going to use it to do "basic data science tasks" that "I can't manage on my own". So when it goes wrong in some subtle way, the equivalent of forgetting to use the right units, you're going to be metaphorically crashing your rocket. Not just you, but all the people who use it to do things they can't do themselves. At least with AI as it exists now.
 
Upvote
26 (26 / 0)

doubleyewdee

Ars Scholae Palatinae
841
Subscriptor++
I'm honestly shocked that Ed is so lauded by the tech reporting industry after his disastrous interviews. He's an emotional wildcard, and his interviews and emotional reactions on This Week in Tech and On The Media should give serious pause to anyone considering him for a logical and reasoned approach.


I think he has a lot of fans simply because he is a grifter. Yeah, I know, he calls everyone he doesn't like a grifter. But his entire modern career is based on affirming the anti-tech hysteria surrounding LLMs specifically, and AI in general. He's laid many bricks that have become this weird anti-AI-anti-tech-as-a-personality-trait thing. It's horrible on Reddit, and I'm sad to see it happening on Ars of all places.

I wouldn't say EZ is a 'grifter' precisely. He's talented and funny and has a certain viewpoint that I find valuable, even though I often don't agree with him. However, your point that he is personally very invested in terms of both income as well as social cachet in AI "collapsing" is well made. He makes a living producing media for an audience that predominantly agrees with him and thus patr(e)onizes his output. Which is fine! He might even end up correct entirely, but I do think he trades to some extent in emotion and getting folks riled up, and I think he overstates his positions because that's what the audience wants. He's a fun follow and a fun read, but not where I'd go for deep technical analysis on trends, or an even-keeled view of the market.

What is missing, I think, is that there are a myriad of voices saying we're absolutely in an AI bubble, but the nuance is that it's more like the .com bubble in 1999-2001. As opposed to, say, the subprime mortgage bubble, or the crypto bubble(s), where there were actually basically zero fundamentals, no use cases, no net positive value from this stuff, and/or the risk was crazy high at every turn. People are taking very big, often stupid, risks on AI/LLMs, betting the farm on replacing staff with largely unproven tech, etc. However, there's also a myriad of uses for both LLMs and broader AI that are already demonstrating genuine value/ROI/novelty. Moreover, in the AI-verse, at least the calls are coming from inside the house, whereas you saw much less of this in the purely speculation-fueled bubbles. Those calls, notes, and urgings for caution need to be heeded more than they are, but at the end of the day it's nigh-impossible to prevent people from buying lottery tickets.

To go back to the gold rush analogy, much gold was retrieved during that period, but the human cost of the gold rush and the suffering caused by an overabundance of "get rich quick" folks crashing out was very high. I think Ed's best work comes when he's calling out these social costs, holding up a bright light and saying "this is not okay, people are being actively harmed by all this." For my taste, he tends to speak a bit too much in absolutes, which dilutes his message, but I think his heart is in the right place.
 
Upvote
18 (20 / -2)
Twenty-six years. And you know what? I miss it. It's a huge pain in the ass to do all that stuff myself. And I'm probably missing a lot of deals and great opportunities due to lack of deep knowledge that can only exist when someone makes it their primary job.
My wife is like a prosumer traveler. It’s not her job, but she loves to travel and is so good at putting together vacations. Friends ask for her help all the time for their travel plans. It just takes lots of online research and sketching out a plan of everything you want to see and do, once you find out what’s available. She’s especially good at the nature side of things. We once spent a week in Vegas with only one day on the strip and the rest hiking the amazing countryside in every direction. Who knew? She’s just as good at any spot in the world you can imagine. But you’re right, this is what travel agents are for.
 
Upvote
9 (9 / 0)

hillspuck

Ars Scholae Palatinae
2,179
My wife is like a prosumer traveler. It’s not her job, but she loves to travel and is so good at putting together vacations. Friends ask for her help all the time for their travel plans. It just takes lots of online research and sketching out a plan of everything you want to see and do, once you find out what’s available. She’s especially good at the nature side of things. We once spent a week in Vegas with only one day on the strip and the rest hiking the amazing countryside in every direction. Who knew? She’s just as good at any spot in the world you can imagine. But you’re right, this is what travel agents are for.
Lucky you! And yeah, that's the beauty of travel agents doing it as a job. How many times do you go to Las Vegas in a year? Travel agents can spend even more of that precious time doing the research because they're doing it for several clients. And they can just refer back to it and update/expand it.

I'm the only one in the family that will do it, and I don't enjoy it. Puts all the weight on me to get things right, so if something goes wrong with my plans (or there is a hole in them I didn't see), I start to get extra stressed out and it ruins the vacation for me.
 
Upvote
4 (4 / 0)
PLEASE post a transcript - human prepared or AI-assisted - after the presentation. I would LOVE to see what questions are asked, and what points made. It's my understanding there have been three prior AI "winters"; it's interesting that there's rumbling about a fourth, now, following its most bullish cycle yet.
 
Upvote
11 (12 / -1)
After seeing Sora 2, and interviews with people like Larry Ellison, Thiel, etc., plus the fact that we know Musk is stealing any personal info he can get from government systems, I'm pretty sure the plan is to feed some LLM every last bit of information these people have on all of us, and then deliver us all some extremely personalized distortion of reality that keeps Heritage Foundation hand-selected CHUDs like Trump in power.
Sora 2 is good. But I saw mistakes and a good AI check will show this. Thus, I agree that these tools, once sharpened, will hone out our very existence. The Antichrist is here, and he's an AI.
 
Upvote
-9 (2 / -11)
The everything bubble already popped. Highly skilled professionals with decades of experience can't get hired. The shrinking number of jobs is being gatekept by incompetent and morally corrupt MBAs who only care about securing their position. The top few percent of earners are responsible for almost all discretionary consumption. We are already past the point of no return.
 
Upvote
5 (10 / -5)

Snark218

Ars Legatus Legionis
36,678
Subscriptor
I wouldn't say EZ is a 'grifter' precisely. He's talented and funny and has a certain viewpoint that I find valuable, even though I often don't agree with him. However, your point that he is personally very invested in terms of both income as well as social cachet in AI "collapsing" is well made. He makes a living producing media for an audience that predominantly agrees with him and thus patr(e)onizes his output. Which is fine! He might even end up correct entirely, but I do think he trades to some extent in emotion and getting folks riled up, and I think he overstates his positions because that's what the audience wants. He's a fun follow and a fun read, but not where I'd go for deep technical analysis on trends, or an even-keeled view of the market.
I really, really doubt there's more for Ed in being an AI contrarian than there is in being one of the very many breathless hype-men swooning over OpenAI. He could make a comfortable living doing that, and very probably a more comfortable living than doing what he's doing. I don't think making a living as a writer with a strong (and defensible!) viewpoint means his viewpoint is shallow or biased or irrational. And it's not clear to me that his view of the market isn't even-keeled, or less so than all the sycophants who breathlessly report on every promise and speculation as if they're prophecies.

Also, why would he need to give deep technical analysis? He's providing an analysis of the market and the broad social and economic trends, not the tech or its specific ins and outs.
What is missing, I think, is that there are a myriad of voices saying we're absolutely in an AI bubble, but the nuance is that it's more like the .com bubble in 1999-2001.
Missing from where?
As opposed to, say, the subprime mortgage bubble, or the crypto bubble(s) where there were actually basically zero fundamentals, no use cases, no net positive value from this stuff, and/or the risk was crazy high at every turn.
So as far as AI goes, I'm not sure I see the distinction you're trying to draw here. The fundamentals are really shaky, and every AI outfit is burning many billions per year with no end in sight. Sam Altman has said plainly and often that he expects the users to come up with the use case. The net value seems to accrue mostly to big investors and executives in the AI space. And investing this much in a technology that cannot scale without access to correspondingly vast amounts of power, GPUs, data centers, and investor capital sounds like a risk that makes subprime mortgages look like a great bet.
People are taking very big, often stupid, risks on AI/LLMs, betting the farm on replacing staff with largely unproven tech, etc. However, there's also a myriad of uses for both LLMs, and broader AI, that are already demonstrating genuine value/ROI/novelty.
Machine learning and so on already demonstrated ROI and value. As far as generative models go? I think there is probably genuine value in smaller, device-hosted generative models, and in models specialized for coding and other specific tasks. It's not clear to me that the giant, power-sucking LLMs were a necessary intermediate step to get there, and it's not actually clear that "small language models" are really any better or more valuable or guaranteed to be successful than the larger ones.
 
Last edited:
Upvote
29 (31 / -2)
If the biggest AI business valuations depend on the big boys pouring more and more capital into buying up all the available silicon for many, many more years, so that most businesses just have to buy from them instead of running their own clusters and open source models, well, I can't see how this is sustainable in any sort of a market economy. No matter how useful the technology.

Unless, of course, some future AI legislation will make it illegal to run non-approved models on non-approved hardware. And municipal AI will of course be considered socialism, and right out. Hmm.
 
Upvote
5 (5 / 0)

Fatesrider

Ars Legatus Legionis
25,121
Subscriptor
The best future for AI is one where it's boring, ubiquitous and relatively useful. Grifters and thieves rely on hype, novelty, and unfamiliarity.

Smart-enough, energy-efficient LLMs without anyone pretending they'll evolve into machine gods - sounds nice.
The one thing they can't get is "energy-efficient LLMs".

Each token costs a set amount. It doesn't really matter what that set amount is; it's that they're set amounts in the first place. And thus far, despite all the tech on the planet being thrown at it, what users pay per token does not cover the energy the token takes to return a result.

The CW in AI is to reduce the cost of a token by cheaply generating energy themselves, instead of buying it off the grid, which is more expensive. All well and good (other than burning the planet to ashes, but that's a side issue at this point), except that the cost per token doesn't decrease ENOUGH for each token to return ANY profit. It's still more expensive to generate the token than the inquiry paid. And the ONLY way to reduce the cost is to reduce the cost of the energy it took to generate that token. That, so far, hasn't become cost effective, nor particularly energy efficient.

I'd also argue that "smart-enough" isn't smart enough. It's more of a social-expectation thing than anything else, but if one is going to consult the oracle, the oracle had better be perfect. Especially if you're paying through the nose for it. People EXPECT machine-like perfection from computers, since for the most part, that's what they typically get. LLMs, OTOH, don't return perfect results RELIABLY ENOUGH to meet that expectation. And if you have to "fact check" your results, then that's YOUR time "wasted", further reducing the efficiency of using LLMs to shortcut a task in the first place.

From a business POV, LLMs are a fucking NIGHTMARE, because you can't profitably charge the cost per token per user (so far; I still leave room for, if not quite see, more energy-efficient means of generating them, but none who have invested in such means have yet seen a profitable return). Scaling up energy production carries a directly proportional increase in the cost to generate that energy.

Economies of scale do not apply to LLMs.

At the heart of it, that's the biggest issue with them. Conventional successful cost-cutting business tactics don't apply.

So you get a service that delivers imperfect results, and to make the service profitable, you'd have to pay more and more for it. The results are inherently going to be subject to inaccuracies, because that's how the system was trained and designed. As impressive as we think they are, from a technological and business perspective, they're fiscal dead ends. Without constant VC infusions, they will go broke. You can't charge more than the market will bear, and the market won't bear much more for imperfect results.

I can't even say for sure whether perfect results would be cost effective in the long run, but IMHO they'd have a better chance of making it so if that were the case. TBH, I see them lowering the energy cost per token LONG before they start getting perfect results. But that's speculation.
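The no-economies-of-scale argument can be sketched as a toy model. Every number below is invented purely for illustration (none are real prices or costs); the point is only the shape of the result: when the marginal (energy) cost per token exceeds the price per token, no amount of volume can push the margin above zero, whereas a business with a large fixed cost and a small marginal cost sees its margin improve as volume grows.

```python
# Toy model: profit per unit when total cost = fixed + marginal * units.
# All numbers are made-up assumptions, purely illustrative.

def margin_per_token(price, marginal_cost, fixed_cost, tokens_served):
    """Profit per token at a given volume."""
    total_cost = fixed_cost + marginal_cost * tokens_served
    return price - total_cost / tokens_served

# Traditional-business analogue: big fixed cost, tiny marginal cost.
# Margin improves as volume grows (economies of scale).
factory = [margin_per_token(1.0, 0.1, 1_000_000, n) for n in (2_000_000, 20_000_000)]

# LLM analogue (per the argument above): the marginal energy cost
# exceeds the price, so margin stays negative at any volume, since it
# can only approach price - marginal_cost = -0.2 from below.
llm = [margin_per_token(1.0, 1.2, 1_000_000, n) for n in (2_000_000, 20_000_000)]

print(factory)  # grows toward price - marginal_cost = 0.9
print(llm)      # stays pinned below zero: each token loses money
```

The takeaway matches the post: amortizing the fixed cost helps both cases a little, but only the low-marginal-cost business ever crosses into profit.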
 
Upvote
15 (17 / -2)

Erbium168

Ars Centurion
2,717
Subscriptor
When it does pop it's taking the rest of the economy down with it.
That I think is alarmist.
There's the real economy, and then there's the bubble economy.
I cannot help wondering if the reason people like Musk are so determined to interfere in politics is because they are aware that they are part of the bubble economy and it could go wrong so fast.

The fact is, people do not need AI, social media or even things like Starlink. They need food, water, clothes, shelter, land and water transport and a degree of energy. On top of that it is very desirable to have things like defence, police, utilities, medicine, education, long distance communication and air transport. After that comes entertainment.
If AI disappeared tomorrow along with Facebook, Instagram, TikTok, Twitter and a few others, the real economy would be barely affected. Except that the Internet would run faster and there would be some surplus generating capacity, which would be positives.

What it would do is cause some small and medium businesses to go out of business, and a number of pension funds would take a haircut along with some venture capitalists - but screw the latter anyway. Many of the people laid off from the small businesses would find employment as utilities and the like had to go back to employing more real people again.
 
Upvote
4 (9 / -5)

hillspuck

Ars Scholae Palatinae
2,179
That I think is alarmist.
There's the real economy, and then there's the bubble economy.
I cannot help wondering if the reason people like Musk are so determined to interfere in politics is because they are aware that they are part of the bubble economy and it could go wrong so fast.
I don't think (or hope) it'll tank the whole economy, but it probably will tank the stock market. Which isn't the economy, but more and more it's where people have their retirement these days.

It's not just the tech companies that are building AI that will suffer. Or even just them and the hardware companies like Nvidia. AI is the horse so many companies have hitched their growth/cost-cutting plans to for the future. When that collapses, so will the (on-paper) value of a lot of companies. I mean, it was all smoke and mirrors anyway; we all know that. But they wrote a big check, and it's about to come time to cash it.
 
Upvote
14 (14 / 0)

GitM

Ars Praetorian
513
Caught the last half of the show. Better than the last one of these I watched. If I have time, I'll go back for the first half. (Although to hear Ed, I had to put my speaker right up to my ear.)

The only way the bubble won't pop is if they actually manage to create an AGI and they reduce the cost of electricity to free. I don't foresee either of those things happening anytime soon.
 
Upvote
8 (8 / 0)

Sarty

Ars Tribunus Angusticlavius
7,853
I'd also argue that "smart-enough" isn't smart enough. It's more of a social-expectation thing than anything else, but if one is going to consult the oracle, the oracle had better be perfect. Especially if you're paying through the nose for it. People EXPECT machine-like perfection from computers, since for the most part, that's what they typically get. LLMs, OTOH, don't return perfect results RELIABLY ENOUGH to meet that expectation.
Lemme tweak this one a skooch. LLMs don't need to be perfect in order to be quite useful. In our messy, incomplete-information universe it can be hard to characterize what "perfect" even means (this is asymmetric--it's easy to identify an imperfect LLM). What LLMs need, and heretofore have not demonstrated, is some kind of liability framework. I need to be able to discipline, which probably means fire, an LLM. Otherwise it's just that earnest-but-useless first-day intern, eager to please but with not the first clue how, forever.

As the famous IBM slide deck warned us from so long ago,
A computer can never be held accountable;
Therefore a computer must never make a management decision.
 
Upvote
15 (15 / 0)

iollmann

Ars Scholae Palatinae
1,280
The economics don't make sense without some wildly reduced operational cost (training is one thing, but queries need to have far lower variable costs).

Given the insane markup on the NVidia devices, I should think the cost improvement is baked in. We just need to convince Jensen to stick to a more traditional 20% margin.
 
Upvote
3 (3 / 0)

HeadPlug

Ars Centurion
257
Subscriptor++
I think Ed was an interesting perspective to host, but I never felt like he truly engaged with the questions he was being asked - he was generally repeating himself a lot in the interview, with every answer seeming to circle back to 3 or 4 canned talking points.

They're not necessarily incorrect talking points, or bad talking points, but I never felt like the interview was thought-provoking or treading interesting ground - Ed thinks AI is overhyped, harmful, and useless; Benj kind of agrees, but wonders if this isn't too simplistic a line of thinking; Ed disagrees, and reiterates that AI is overhyped, harmful, and useless.
That's it.

I kind of wish Ed would explore the opposing point more, even if only to disprove it straight afterwards; that would've been more interesting and/or persuasive, imo
 
Upvote
15 (15 / 0)
I'm not an expert coder (my specialty is in forensics), but it's not just building formulas. It's getting and parsing the smart meter data, programmatically fetching all the plan data labels, parsing them, and comparing specific time-of-use rates or perks like free weekends or free EV charging. This is all data it chugs through easily. It took a couple hours to tweak everything, but it would have been beyond my ability to easily download hundreds of PDFs, visually parse the contents, sort all the data into Excel, and merge it with my meter, solar, and EV usage data. If you can do all that easily, I respect you, but it 100% lowered the barrier of entry for me, and I still learned a ton with it guiding me and me giving iterative feedback until it all worked.
IMHO this is a symptom of a defective system that is too byzantine for simple humans to parse, not a showcase for AI. Why it is this way, I will not try to reason out here. Malice? Maybe. Incompetence? The weight of regulation and various legacies? I don't know. I think the system should be fixed, not another bandaid layer added.
 
Upvote
0 (0 / 0)

HydraShok

Ars Legatus Legionis
13,054
Subscriptor
I think it's clear that there is a future for LLMs, but also that even the major players in the industry are selling lies and fantasies. There is a bubble; the question is how big the adjustment will be when it pops, and what AI will look like after that happens. It's not going to go away, but it's also not going to get us a post-scarcity society.
It's the dot-com boom all over again. Any idea with AI attached gets tons of money, even with a nonexistent use case and no hope of profitability, just as any dot-com startup once could. When the dot-com bubble finally burst, it didn't take the internet with it -- but it did clear out quite a bit of junk. I suspect we'll see something similar this time around. And again with whatever comes next.
 
Upvote
7 (7 / 0)
Some personal prognostication about what's revealed in this, based on my own reading:

Q: Is actual business value being created?
A: MAYBE, but not at a level that can sustain or justify the resources and costs being thrown at it. And the "maybe" depends a lot on who you ask, tending to be more positive in direct proportion to the asker's current investment in AI (so, PROBABLY not).

Q: Is AI generating returns?
A: None that cover operational expenses for ANYONE so far. And barring some unforeseen breakthrough in how AI is done, the outlook is that it never will, because it CAN'T. AI doesn't have the economies of scale of a traditional business, where adding customers costs less per customer over time; here the cost per customer (as measured by their individual inputs) is IDENTICAL each time. Providers have to cut overall operational costs just to meet demand before they can see returns that pay for the services, let alone make more than the services cost to deliver (AKA, a profit). So far, operational costs haven't been lowered enough for the revenue coming in to close the gap between what's needed and what's being spent.

Q: Is AI hype peaking?
A: Personally, I think it's past frantic and bordering on hysterical. They're desperate to keep the VC money flowing, because once that dries up, AI collapses and takes an estimated trillion and a half dollars with it (based on the assessment that HALF of the $3 trillion in economic growth over the last 5 years has been investment in AI).

What happens with what remains after that implosion is anyone's guess, but pennies on the dollar, if that much, are likely, since almost all of that power generation (the only capital investment that MIGHT find a place in the aftermath) was designed to power data centers, not grids, and more money will be needed to make those connections and administer the plants. And we probably don't want to add that to the grid, because it's NOT generally clean energy.

So, it's very likely that the vast majority, if not all of, the energy generating gear built by these companies will be scrapped, because overall energy use has been covered by cleaner options. And if you dumped that on the market, the energy market would TANK, which wouldn't be good for anyone.

It's very likely gonna get ugly for Altman and Co.
I agree on every point here except the large surplus of power generation being inherently a bad thing. Lots of industries (not IT related) could benefit a lot from cheap power, increasing their competitiveness. Also, there is a correlation between living standards and the availability of cheap energy to the general population.
 
Upvote
3 (4 / -1)

ashypans

Wise, Aged Ars Veteran
101
Subscriptor
That I think is alarmist.
There's the real economy, and then there's the bubble economy.
I hope you are right, but with every passing day I lean further towards thinking it is a pragmatic take rather than an unfounded one. I am not so sure the bubble economy and the "real" economy are as separate as you believe.
Sure, there are tangible elements of an economy; the bubble popping won't change how much soy you could theoretically grow in a field. But a large part of the "real" economy is feelings too. The production potential of a field doesn't matter if you can't secure credit for the seeds because your financial institution "feels" it can no longer shoulder that risk, now that it has seen its holdings decimated through overexposure to the US tech market.
The US is in a weird spot where considerable swaths of the real economy are balancing on the knife's edge of solvency, but the GDP figures look pretty swell thanks to being buoyed by the major boom in tech investment. Take that away, and the reality we are forced to come back to could be pretty grim.
 
Upvote
15 (15 / 0)

Erbium168

Ars Centurion
2,717
Subscriptor
I hope you are right, but with every passing day I lean further towards thinking it is a pragmatic take rather than an unfounded one. I am not so sure the bubble economy and the "real" economy are as separate as you believe.
Sure, there are tangible elements of an economy; the bubble popping won't change how much soy you could theoretically grow in a field. But a large part of the "real" economy is feelings too. The production potential of a field doesn't matter if you can't secure credit for the seeds because your financial institution "feels" it can no longer shoulder that risk, now that it has seen its holdings decimated through overexposure to the US tech market.
The US is in a weird spot where considerable swaths of the real economy are balancing on the knife's edge of solvency, but the GDP figures look pretty swell thanks to being buoyed by the major boom in tech investment. Take that away, and the reality we are forced to come back to could be pretty grim.
US GDP is about $30T. Total US market cap is about $62T.
Tech sector market cap (which, as you say, is entirely based on feelings): about $26T.
Tech sector turnover: about $3.5T.

So over 40% of the entire US market cap rests on tech stocks that together represent only about 12% of GDP.

That looks like a bubble to me.
But a lot of the tech sector is not bubble. There is real stuff out there doing important jobs. I suspect that LLM AI is only around 6% of GDP.
The 2008 crash was, what, something like an 8% economic hit? I suspect an AI crash would be less than that. And there could even be upsides: if data centres actually shut down, there would be a generation capacity surplus, which could mean a reduction in coal and oil burning as older, less efficient plants are retired.
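As a sanity check on those round figures (all approximate, as quoted above):

```python
# Rough figures from the post above, in trillions of USD (approximate).
gdp = 30.0
total_market_cap = 62.0
tech_market_cap = 26.0
tech_turnover = 3.5

cap_share = tech_market_cap / total_market_cap  # tech share of market cap
gdp_share = tech_turnover / gdp                 # tech share of GDP

print(f"tech share of US market cap: {cap_share:.0%}")  # ~42%
print(f"tech share of US GDP:        {gdp_share:.0%}")  # ~12%
```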
 
Upvote
11 (11 / 0)

ashypans

Wise, Aged Ars Veteran
101
Subscriptor
US GDP is about $30T. Total US market cap is about $62T.
Tech sector market cap (which, as you say, is entirely based on feelings): about $26T.
Tech sector turnover: about $3.5T.

So over 40% of the entire US market cap rests on tech stocks that together represent only about 12% of GDP.

That looks like a bubble to me.
But a lot of the tech sector is not bubble. There is real stuff out there doing important jobs. I suspect that LLM AI is only around 6% of GDP.
The 2008 crash was, what, something like an 8% economic hit? I suspect an AI crash would be less than that. And there could even be upsides: if data centres actually shut down, there would be a generation capacity surplus, which could mean a reduction in coal and oil burning as older, less efficient plants are retired.
Sure, there will be upsides. Heck, if the capital market isn't hit too hard, it may even free up investment for more productive ventures. But I do see a lot of financial institutions being heavily enough invested that I expect ripples, probably even disruption, outside of the tech sector. Your numbers aren't far off from what I had in mind either. When I say decimated, I mean it in the traditional sense of "losing one tenth." I guess it's just that when I look outside of the tech space, I don't see much margin left for new ripples and disruptions.
 
Upvote
8 (8 / 0)

Stuart Frasier

Ars Tribunus Angusticlavius
6,480
Subscriptor
Sure, there will be upsides. Heck, if the capital market isn't hit too hard, it may even free up investment for more productive ventures. But I do see a lot of financial institutions being heavily enough invested that I expect ripples, probably even disruption, outside of the tech sector. Your numbers aren't far off from what I had in mind either. When I say decimated, I mean it in the traditional sense of "losing one tenth." I guess it's just that when I look outside of the tech space, I don't see much margin left for new ripples and disruptions.
Part of the problem is that there is far too much money chasing too few productive investments. It’s yet another downside to inequality and the failure to tax the wealthy adequately.
 
Upvote
14 (15 / -1)