The only thing that I would add to that excellent summary is that it shines a bright light on the single most important element that we can and must focus on: who the anger should be directed toward. That would be the oligarchs who continue to put so much effort into pitting Americans against each other (and this most definitely applies to other countries as well).

> I agree with your points, and this bit (my bolding for emphasis) probably explains a lot of broader social issues around the world.
>
> If we're chasing growth at all costs but said growth is in tech bubbles and property rather than productive labour activity, then most of the population is not going to feel the benefit of it and is going to get a bit angry.
You know who made the real money in the US gold rush? The merchants. The people selling all those fools tools and food and jeans.
https://history.howstuffworks.com/american-history/gold-rush.htm
Seems familiar.
The market can remain irrational longer than you can remain solvent, as ever.

> Okay, your point?
>
> How long did it take for the real estate bubble to pop in '07/'08?
>
> Oh, right, about 8 years.
I would like to also throw out the digressive and totally off topic hot take that Dr. X was the actual deuteragonist of The Diamond Age, not Hackworth, and is arguably the most heroic of its characters. Don't @ me.

> Hey, Dr. X did the best he could.
I would have agreed with that take until Dr. X's last interaction with Hackworth, where he expresses regret for saving the girls.

> I would like to also throw out the digressive and totally off topic hot take that Dr. X was the actual deuteragonist of The Diamond Age [...]
To some extent. It's a little irresponsible to not vet the coding of the Primers (which meant that instead of brainwashing the orphan girls to serve the Celestial Kingdom, Hackworth was able to brainwash them into a Mouse Army to serve Nell).

> Hey, Dr. X did the best he could.
I don't think that's a correct way to characterize that interaction.

> I would have agreed with that take until Dr. X's last interaction with Hackworth, where he expresses regret for saving the girls.
Dr. X considered it. "It would be more correct to say that, although it was virtuous to save them, it was mistaken to believe that they could be raised properly. We lacked the resources to raise them individually, and so we raised them with books. But the only proper way to raise a child is within a family. The Master could have told us as much, had we listened to his words."
I think many of you professional coders are forgetting that there are quite a few people out there (myself included) who never learned to code professionally. Yet we still occasionally need working code. It doesn't have to be elegant or safe, because it's just meant to run once or handle a one-off task. This applies to both personal and professional contexts and can genuinely add value to our lives and work.

> My issue is that it's not a great sounding board. Using Wikipedia's definition:
>
> ChatGPT provides loads of comments. Comments that are often plain wrong, and I have to figure that out the hard way if I'm trying to use it to write something new. (If I'm writing something old, I just use my own brain with a few web searches to refresh myself on syntax or specific library function names.) And then if you tell it it's wrong, or just ask it if it's sure, it's a yes man that will tell you, "Oh great sir, I apologize, you are correct, and here's what I should have said instead." It will tell you that even if you are wrong and it was right to begin with.
>
> At least, that's my experience with it as someone who has been coding for a living for 30 years. I don't hate the idea of it (beyond the ethical problems behind training on data without permission), but I do just find it to be an idiot. And you know, sometimes other human programmers are idiots as well. But I don't need to go seek that kind of thing out to contribute to how I write my code.
It is providing extreme value in research and development. Used for its intended purpose, AI is an extremely powerful tool. There is this strange and exclusive focus here on LLM chatbots that indeed have questionable use cases. But if you know what you are doing, and don't try to use an LLM as a calculator, what is happening is very exciting. Scientific progress is already accelerating because of AI models. There are countless instances in the life sciences, mathematics, physics, materials science, medicine, etc. where AI has helped progress. And it is just 3 years since ChatGPT launched.

> When the bubble will pop is a valid question; all bubbles do. But a better question to me is: what does AI actually do?
>
> Apparently, it can create code pretty well. We can already do that, but the machine can do it cheaper. Fine. I suppose AI could design and fabricate and build, say, cars, but we have had machines doing these things for 50 years. Or AI could be a chef, and we could eliminate the kitchen staff and save money. Again, we have had machines for 50 years that are able to do that, but they don't bring any creativity to it. They follow the recipe, the programming. It would be like Applebee's. Which isn't necessarily bad, but are they going to spend twenty billion dollars to make the food that the Applebee's staff is already making? Probably not.
>
> I guess AI could do the research and writing to get me through high school and college. Maybe. But I had Earth Science in 7th grade. If I had ChatGPT write my term paper, it would probably insist that the dinosaurs created airplanes in the Carboniferous Era. So I might have to repeat 7th grade. Since AI likes to just make shit up, it's not any good for anything that is fact-based.
>
> So what CAN AI do? If you need a fake friend, AI can be that for you. It may eventually tell you to take your own life, so YMMV. Outside of that, I'm really not sure. AI does few tasks well, and for the tasks it can do, we already have much cheaper machines, or people, that do the job just as well. So you have to ask yourself how much money you want to spend to get what you already have, only seeming cooler. Does cool have a monetary value? No, no it does not.
>
> AI looks a LOT like the Magic Blood Machine that Elizabeth Holmes is in prison for right now. In the same way that Holmes promised her machine would change the world, the AI people say AI will change the world. But AI would fail Earth Science, claiming gasoline comes from dinosaurs. So the reality is, RIGHT NOW, AI is worthless. Could AI unify physics, and do away with bullshit such as dark matter and string theory? Perhaps. But right now it can't be trusted to tell you what the weather will be tomorrow. And yet, our economy is largely based on AI and Taylor Swift.
>
> To paraphrase Spock, all bubbles burst. As will the AI bubble. And since so much of this money is coming from high-profile people, if AI turns out to be a grift like the Magic Blood Machine, there will be hell to pay. Because in the United States, it's perfectly fine for monied people to steal from poor people. But when less monied people steal from the wealthy, the less monied people go to fucking prison. And AI sure looks like a fucking grift to me.
Don't worry... our underperforming 401(k)s will provide the safety net for all the incredibly wealthy to buy stocks with their liquid assets at basement pricing and reap even more wealth on the rebound.

> When it does pop, it's taking the rest of the economy down with it.
The thing is, that (and image-generating tools like Stable Diffusion) are basically all that's being sold to consumers; they're what's being shoved in our faces by operating systems and office suites and search engines and social media and customer support pages and and and...

> There is this strange and exclusive focus here on LLM chatbots that indeed have questionable use cases.
You're using as your baseline for where to measure progress for AI... not machine learning software in general, not even the GPT LLM itself, but the chatbot version of it.

> And it is just 3 years since ChatGPT launched.
I'm not really forgetting it. I'm just talking about using it for the non-basic things. I find it generally works very well for grunt-work things like writing PowerShell scripts or some simple Python code. I'm talking about using it to write code that would take me more than a small amount of knowledge checking and a larger amount of time just cranking it out, and where it reaches the complexity that trips up existing AI.

> I think many of you professional coders are forgetting that there are quite a few people out there (myself included) who never learned to code professionally. Yet we still occasionally need working code.
I'm not talking about it being elegant or (mostly) even safe. I'm talking about it working at all. It produces code that won't even compile.

> It doesn't have to be elegant or safe, because it's just meant to run once or handle a one-off task.
And this is why we're fucked. You're going to use it to do "basic data science tasks" that you "can't manage on [your] own". So when it goes wrong in some subtle way, the equivalent of forgetting to use the right units, you're going to be metaphorically crashing your rocket. Not just you, but all the people who use it to do things they can't do themselves. At least with AI as it exists now.

> In my case, I frequently need some basic data science tasks done in either R or Python. It's never production-grade, but it's still complex enough that I can't manage it on my own (or would need to invest countless hours I don't have, since I'm a highly specialized professional in an entirely different field).
I'm honestly shocked that Ed is so lauded by the tech reporting industry after his disastrous interviews. He's an emotional wildcard, and his interviews and emotional reactions on This Week in Tech and On The Media should give serious pause to anyone looking to him for a logical and reasoned approach.
I think he has a lot of fans simply because he is a grifter. Yeah, I know, he calls everyone he doesn't like a grifter. But his entire modern career is based on affirming the anti-tech hysteria surrounding LLMs specifically, and AI in general. He's laid many bricks that have become this weird anti-AI, anti-tech-as-a-personality-trait thing. It's horrible on Reddit, and I'm sad to see it happening on Ars of all places.
My wife is like a prosumer traveler. It's not her job, but she loves to travel and is so good at putting together vacations. Friends ask for her help all the time for their travel plans. It just takes lots of online research and sketching out a plan of everything you want to see and do, once you find out what's available. She's especially good at the nature side of things. We once spent a week in Vegas with only one day on the strip and the rest hiking the amazing countryside in every direction. Who knew? She's just as good at any spot in the world you can imagine. But you're right, this is what travel agents are for.

> Twenty-six years. And you know what? I miss it. It's a huge pain in the ass to do all that stuff myself. And I'm probably missing a lot of deals and great opportunities due to lack of the deep knowledge that can only exist when someone makes it their primary job.
Lucky you! And yeah, that's the beauty of travel agents doing it as a job. How many times do you go to Las Vegas in a year? Travel agents can spend even more of that precious time doing the research because they're doing it for several clients. And they can just refer back to it and update/expand it.

> My wife is like a prosumer traveler. It's not her job, but she loves to travel and is so good at putting together vacations. [...] But you're right, this is what travel agents are for.
Sora 2 is good. But I saw mistakes, and a good AI check will show this. Thus, I agree that these tools, once sharpened, will hone out our very existence. The Antichrist is here, and he's an AI.

> After seeing Sora 2, and seeing interviews with people like Larry Ellison, Thiel, etc., plus the fact that we know Musk is stealing any personal info he can get from government systems, I'm pretty sure the plan is to feed some LLM every last bit of information these people have on all of us, and then deliver us all some extremely personalized distortion of reality that keeps Heritage Foundation hand-selected CHUDs like Trump in power.
I really, really doubt there's more in it for Ed in being an AI contrarian than there is in being one of the very many breathless hype-men swooning over OpenAI. He could make a comfortable living doing that, and very probably a more comfortable living than doing what he's doing. I don't think making a living as a writer with a strong (and defensible!) viewpoint means his viewpoint is shallow or biased or irrational. And it's not clear to me that his view of the market isn't even-keeled, or less so than all the sycophants who breathlessly report on every promise and speculation as if they're prophecies.

> I wouldn't say EZ is a "grifter" precisely. He's talented and funny and has a certain viewpoint that I find valuable, even though I often don't agree with him. However, your point that he is personally very invested, in terms of both income and social cachet, in AI "collapsing" is well made. He makes a living producing media for an audience that predominantly agrees with him and thus patr(e)onizes his output. Which is fine! He might even end up entirely correct, but I do think he trades to some extent in emotion and getting folks riled up, and I think he overstates his positions because that's what the audience wants. He's a fun follow and a fun read, but not where I'd go for deep technical analysis on trends, or an even-keeled view of the market.
Missing from where?

> What is missing, I think, is that there are a myriad of voices saying we're absolutely in an AI bubble, but few demonstrating the nuance that it's more like the .com bubble in 1999-2001.
So as far as AI goes, I'm not sure I see the distinction you're trying to draw here. The fundamentals are really shaky; every AI outfit is burning many billions per year with no end in sight; Sam Altman has said plainly and often that he expects the users to come up with the use case; the net value seems to accrue mostly to big investors and executives in the AI space; and investing this much in a technology that cannot scale without access to correspondingly vast amounts of power, GPUs, data centers, and investor capital sounds like a risk that makes subprime mortgages look like a great bet.

> As opposed to, say, the subprime mortgage bubble, or the crypto bubble(s), where there were basically zero fundamentals, no use cases, no net positive value from this stuff, and/or the risk was crazy high at every turn.
Machine learning and so on have already demonstrated ROI and value. As far as generative models go? I think there is probably genuine value in smaller, device-hosted generative models, and in models specialized for coding and other specific tasks. It's not clear to me that the giant, power-sucking LLMs were a necessary intermediate step to get there, and it's not actually clear that "small language models" are any better or more valuable or more guaranteed to be successful than the larger ones.

> People are taking very big, often stupid, risks on AI/LLMs, betting the farm on replacing staff with largely unproven tech, etc. However, there's also a myriad of uses for both LLMs and broader AI that are already demonstrating genuine value/ROI/novelty.
The one thing they can't get is "energy-efficient LLMs".

> The best future for AI is one where it's boring, ubiquitous, and relatively useful. Grifters and thieves rely on hype, novelty, and unfamiliarity.
Smart-enough, energy-efficient LLMs without anyone pretending they'll evolve into machine gods - sounds nice.
That I think is alarmist.

> When it does pop, it's taking the rest of the economy down with it.
I don't think (or hope) it'll tank the whole economy, but it probably will tank the stock market. Which isn't the economy, but more and more it's where people have their retirement these days.

> That I think is alarmist.
There's the real economy, and then there's the bubble economy.
I cannot help wondering if the reason people like Musk are so determined to interfere in politics is that they are aware they are part of the bubble economy, and that it could all go wrong so fast.
Lemme tweak this one a skooch. LLMs don't need to be perfect in order to be quite useful. In our messy, incomplete-information universe it can be hard to characterize what "perfect" even means (this is asymmetric: it's easy to identify an imperfect LLM). What LLMs need, and heretofore have not demonstrated, is some kind of liability framework. I need to be able to discipline, which probably means fire, an LLM. Otherwise it's just that earnest-but-useless first-day intern, eager to please but with not the first clue how, forever.

> I'd also argue that "smart-enough" isn't smart enough. It's more of a social-expectation thing than anything else, but if one is going to consult the oracle, the oracle had better be perfect. Especially if you're paying through the nose for it. People EXPECT machine-like perfection from computers, since for the most part, that's what they typically get. LLMs, OTOH, don't return perfect results RELIABLY ENOUGH to meet that expectation.
>
> A computer can never be held accountable;
>
> Therefore a computer must never make a management decision.
The economics don't make sense without some wildly reduced operational cost (training is one thing, but queries need to have far lower variable costs).
IMHO this is a symptom of a defective system that is too byzantine for simple humans to parse, and not a showcase for AI. Why it is this way, I will not try to reason here. Malice? Maybe. Incompetence? The weight of regulation and various legacies? I don't know. I think the system should be fixed, not another bandaid layer added.

> I'm not an expert coder; my specialty is in forensics. But it's not just building formulas. It's getting and parsing the smart meter data, programmatically fetching all the plan data labels, parsing them, and comparing specific time-of-use rates or perks like free weekends or free EV charging. This is all data it chugs through easily. It took a couple hours to tweak everything, but it would have been beyond my ability to easily download hundreds of PDFs, visually parse the contents, sort all the data into Excel, and merge it with my meter, solar, and EV usage data. If you can do all that easily, I respect you, but it 100% lowered the barrier of entry for me, and I still learned a ton with it guiding me and me giving iterative feedback until it all worked.
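For what it's worth, the kind of rate-plan comparison described in the quoted workflow (hourly smart-meter readings scored against competing time-of-use plans) can be sketched in a few dozen lines. Everything below is a hypothetical illustration: the plan names, rates, and peak window are invented placeholders, not data from the post.

```python
from datetime import datetime, timedelta

# Hypothetical time-of-use plans; real ones would be parsed from utility PDFs.
PLANS = {
    "flat":          {"peak": 0.14, "offpeak": 0.14, "free_weekends": False},
    "tou":           {"peak": 0.22, "offpeak": 0.09, "free_weekends": False},
    "free_weekends": {"peak": 0.18, "offpeak": 0.13, "free_weekends": True},
}

def cost_for_plan(plan: dict, usage: list) -> float:
    """Price a list of (hour, kWh) smart-meter readings under one plan.

    Peak is assumed to be 3pm-9pm on weekdays; everything else is off-peak.
    """
    total = 0.0
    for when, kwh in usage:
        weekend = when.weekday() >= 5
        if weekend and plan["free_weekends"]:
            continue  # the perk zeroes out weekend usage
        peak = (not weekend) and 15 <= when.hour < 21
        total += kwh * (plan["peak"] if peak else plan["offpeak"])
    return round(total, 2)

# Toy week of usage: a flat 1.5 kWh every hour, starting on a Monday.
start = datetime(2024, 6, 3)
usage = [(start + timedelta(hours=h), 1.5) for h in range(7 * 24)]

# Rank plans from cheapest to priciest for this usage profile.
ranked = sorted(PLANS, key=lambda name: cost_for_plan(PLANS[name], usage))
for name in ranked:
    print(f"{name:14s} ${cost_for_plan(PLANS[name], usage):7.2f}")
```

With real meter data the `usage` list would come from the utility's export, and the perk logic would need to match each plan's fine print; the point is only that the comparison itself is straightforward once the data is wrangled into shape.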
It's the dot-com boom all over again. Any idea with AI in it gets tons of money, even if it has a nonexistent use case and no hope of profitability, just like any dot-com startup once could. When the dot-com bubble finally burst, it didn't take the internet with it, but it did clear out quite a bit of junk. I suspect we'll see something similar this time around. And again with whatever comes next.

> I think it's clear that there is a future for LLMs, and also that even the major players in the industry are selling lies and fantasies. There is a bubble; the question is how big the adjustment will be when it pops, and what AI will look like after that happens. It's not going to go away, but it's also not going to get us a post-scarcity society.
I agree on every point here except the large surplus of power generation being inherently a bad thing. Lots of industries (not IT related) may benefit a lot from cheap power, increasing their competitiveness. Also, there is a correlation between living standards and cheap energy availability for the general population.

> Some personal prognostication about what's revealed in this, based on my own reading:
>
> Q: Is actual business value being created?
>
> A: MAYBE, but not at a level that can sustain or justify the resources and costs being thrown at it. And the "maybe" depends a lot on who you ask, and tends to be more positive in direct proportion to the respondent's current investment in AI (so, PROBABLY not).
>
> Q: Is AI generating returns?
>
> A: None that cover operational expenses for ANYONE so far. And barring some unforeseen breakthrough in how AI is done, the outlook is that it never will, because it CAN'T. AI doesn't have economies of scale the way a traditional business does, where adding customers costs less per customer over time. The cost per customer (as measured by their individual inputs) is IDENTICAL each time. They have to reduce overall operational costs to meet the demand before they can see any returns that pay for the services, let alone make more than the services cost to deliver (AKA, make a profit). So far, operational costs haven't been lowered enough for the revenue coming in to make up the difference between what's needed and what's being spent.
>
> Q: Is AI hype peaking?
>
> A: Personally, I think it's past frantic and bordering on hysterical, actually. They're desperate to keep the VC money flowing, because once that dries up, AI collapses and takes a trillion and a half dollars with it (estimated from the assessment that HALF of the $3 trillion in economic growth over the last 5 years has been investment in AI).
>
> What happens with all of what remains after that implosion is anyone's guess, but pennies on the dollar, if that much, are likely, since almost all of that power generation (the only capital investment that MIGHT find a place in the aftermath) was designed to power data centers, not grids, and more money would be needed to make those connections and administer the plants. And we probably don't want to add that to the grid, because it's NOT generally clean energy.
>
> So it's very likely that the vast majority, if not all, of the energy-generating gear built by these companies will be scrapped, because overall energy use has been covered by cleaner options. And if you dumped that on the market, the energy market would TANK, which wouldn't be good for anyone.
>
> It's very likely gonna get ugly for Altman and Co.
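The economies-of-scale argument above (a per-query cost that does not fall as customers are added) can be put in toy-model form. Every number below is invented purely for illustration; the point is the shape of the curves, not the values.

```python
# Toy unit-economics model: a business with a near-constant marginal cost per
# customer (inference compute) never sees its average cost approach zero as it
# grows, unlike a classic software business whose costs are mostly fixed.

def cost_per_customer(customers: int, fixed_cost: float, marginal_cost: float) -> float:
    """Average monthly cost per customer: fixed costs spread out, plus marginal."""
    return fixed_cost / customers + marginal_cost

FIXED = 1_000_000.0       # training, staff, data centers (per month, made up)
CLASSIC_MARGINAL = 0.05   # classic SaaS: serving one more user is nearly free
AI_MARGINAL = 18.0        # AI service: every customer's queries burn real compute

for n in (10_000, 100_000, 1_000_000):
    classic = cost_per_customer(n, FIXED, CLASSIC_MARGINAL)
    ai = cost_per_customer(n, FIXED, AI_MARGINAL)
    print(f"{n:>9,} customers: classic ${classic:8.2f}/cust  ai ${ai:8.2f}/cust")
```

Under these assumptions the classic business's per-customer cost collapses toward its tiny marginal cost as it scales, while the AI service's cost stays pinned near its marginal compute cost no matter how many customers it adds, which is the quoted poster's claim in miniature.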
I hope you are right, but with every passing day I lean further towards thinking it is a pragmatic take rather than an unfounded one. I am not so sure the bubble economy and "real" economy are as separated as you believe.

> That I think is alarmist.
>
> There's the real economy, and then there's the bubble economy.
US GDP is about $30T. Total market cap about $62T.

> I hope you are right, but with every passing day I lean further towards thinking it is a pragmatic take rather than an unfounded one. I am not so sure the bubble economy and "real" economy are as separated as you believe.
>
> Sure, there are the tangible elements of an economy; the bubble popping won't change how much soy you could theoretically grow in a field. But a large part of the "real" economy is feelings too. The production potential of a field doesn't matter if you can't secure the credit for the seeds, because your financial institution "feels" it can no longer shoulder that risk now that it has seen its holdings decimated through overexposure to the US tech market.
>
> The US is in a weird spot where considerable swaths of the real economy are balancing on the knife's edge of solvency, but where the GDP figures look pretty swell thanks to being buoyed by the major boom in tech investment. Take that away, and the reality that we are forced to come back to could be pretty grim.
Sure, there will be upsides. Heck, if the capital market isn't hit too hard, it may even free up investment for more productive ventures. But I do see a lot of financial institutions being heavily enough invested that I expect ripples, probably even disruption, outside of the tech sector. Your numbers aren't far off from what I had in mind either. When I say decimated, I mean it in the traditional sense of "losing one tenth". I guess it's just that when I look outside of the tech space, I don't see very much margin left for new ripples and disruptions.

> US GDP is about $30T. Total market cap about $62T.
>
> Tech sector market cap (which as you say is entirely based on feelings): $26T.
>
> Tech sector turnover: $3.5T.
>
> So apparently over 40% of the entire US market cap is dependent on tech stocks, which together represent only 12% of GDP.
>
> That looks like a bubble to me.
>
> But a lot of the tech sector is not bubble. There is real stuff out there doing important jobs. I suspect that LLM AI is only around 6% of GDP.
>
> The 2008 crash was something like a what, 8% economic hit? I suspect an AI crash would be less than that. And there could even be upsides; if data centres actually shut down, there would be a generation capacity surplus, which could mean a reduction in coal and oil burning as older, less efficient plants are retired.
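The headline ratios in those figures check out as back-of-envelope arithmetic (values in trillions of USD, exactly as given above):

```python
# Figures quoted in the thread, in trillions of USD.
gdp = 30.0            # US GDP
market_cap = 62.0     # total US market cap
tech_cap = 26.0       # tech sector market cap
tech_turnover = 3.5   # tech sector turnover

cap_share = tech_cap / market_cap   # tech's share of total market cap
gdp_share = tech_turnover / gdp     # tech turnover as a share of GDP

print(f"tech share of market cap: {cap_share:.1%}")    # ~41.9%, i.e. "over 40%"
print(f"tech turnover share of GDP: {gdp_share:.1%}")  # ~11.7%, i.e. roughly 12%
```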
Part of the problem is that there is far too much money chasing too few productive investments. It's yet another downside to inequality and the failure to tax the wealthy adequately.

> Sure, there will be upsides. [...] I guess it's just that when I look outside of the tech space, I don't see very much margin left for new ripples and disruptions.