The Generative AI Bubble Is Really Going to Pop - Part Deux

tb12939

Ars Tribunus Militum
2,013
And use what.
There's not a lot of options right now - hence the 'back away slowly' part.

It's not the European tech sector's fault: if >90% of the potential revenue goes to foreign companies, don't be surprised when you don't have a local option by the time you finally realise that's an enormous strategic mistake.

But arguing that this commonly realised error is going to be repeated in an adjacent sector because nobody actually realised it's an error? Eh, no.
 

hanser

Ars Legatus Legionis
43,061
Subscriptor++
There's not a lot of options right now - hence the 'back away slowly' part.

It's not the European tech sector's fault: if >90% of the potential revenue goes to foreign companies, don't be surprised when you don't have a local option by the time you finally realise that's an enormous strategic mistake.

But arguing that this commonly realised error is going to be repeated in an adjacent sector because nobody actually realised it's an error? Eh, no.
Defense comes to mind. :scared:

I actually think "values-aligned supply chains" are more of a strategic priority now than they were before covid, generally speaking. This has significant implications for the private sector which provides most of those things.

That said, Mistral is European, and they dropped a new OSS model yesterday(?) that's pretty good.
 

w00key

Ars Tribunus Angusticlavius
8,982
Subscriptor
Defense comes to mind. :scared:

I actually think "values-aligned supply chains" are more of a strategic priority now than they were before covid, generally speaking. This has significant implications for the private sector which provides most of those things.

That said, Mistral is European, and they dropped a new OSS model yesterday(?) that's pretty good.
We used to think others would play nice too. I mean, Europe is a pretty diverse place and we all get along pretty okay. Never thought US would be the troublemaker, UK and US have (had?) a pretty high standing.

LLMs are small fry compared to F-35s not working when you need them; they require a US-run cloud to upload mission packages. Gizmodo: The Pentagon Denies the F-35 Has a Kill Switch, but Its Software Demands Amount to the Same Thing


But yeah, that cat is out of the bag now. Now we prioritize local spending: Lidl's cloud scored a few big government contracts supplying locally sourced cloud compute, and some PV panel bids required locally produced panels. Just like agricultural policy, you can't depend on the rest of the world for all your critical resources. Diversify, even if it costs a bit more.
 

Technarch

Ars Legatus Legionis
15,335
Subscriptor
That's why you run the models on your own walled garden.

Lots of enterprises in Europe already trust Amazon AWS, Google Cloud or Microsoft Azure with ALL their data and compute.

They have little choice. I can't imagine what it would take to build that level of cloud computing provider from scratch. AWS and Google Cloud started out renting spare compute from infrastructure they already had and I doubt Azure started from nothing either. Building that from the ground up would be nigh on impossible even without the current run on chips.
 

w00key

Ars Tribunus Angusticlavius
8,982
Subscriptor
They have little choice. I can't imagine what it would take to build that level of cloud computing provider from scratch. AWS and Google Cloud started out renting spare compute from infrastructure they already had and I doubt Azure started from nothing either. Building that from the ground up would be nigh on impossible even without the current run on chips.
But why "that level of cloud computing"?

No one starts out at AWS scale. Even AWS was just a bunch of VMs at the beginning.

Europe has a few decent hosters: Hetzner is the king of value, OVH is bigger, with cloud services like managed databases and Kubernetes, and Leaseweb focuses on dedicated and high-bandwidth servers (10 Gbps unmetered for €1,000). These are also the global top 3 for dedicated servers. Then there are 999 other players at the VMs-and-rent-a-rack, bring-your-own-servers scale.

Okay, none of them offer hosted message queues or global databases like Spanner, but I have a feeling most orgs just run dumb shit on AWS anyway and don't use managed services for everything, for a simple reason: $.


There's also another player that followed Amazon's path, retailer to cloud host. Lidl, the supermarket, is now a serious alternative to AWS. https://stackit.com/en/news/stackit-becomes-the-dutch-government-s-official-cloud-alternative

Not cheap at all, but possibly a better service than the three mentioned above; those three are all in the bargain, value-for-money tier, not the premium, value-added tier of hosters.
 
  • Like
Reactions: Technarch

ramases

Ars Tribunus Angusticlavius
8,703
Subscriptor++
But why "that level of cloud computing"?

No one starts out at AWS scale. Even AWS was just a bunch of VMs at the beginning.

Europe has a few decent hosters: Hetzner is the king of value, OVH is bigger, with cloud services like managed databases and Kubernetes, and Leaseweb focuses on dedicated and high-bandwidth servers (10 Gbps unmetered for €1,000). These are also the global top 3 for dedicated servers. Then there are 999 other players at the VMs-and-rent-a-rack, bring-your-own-servers scale.

Okay, none of them offer hosted message queues or global databases like Spanner, but I have a feeling most orgs just run dumb shit on AWS anyway and don't use managed services for everything, for a simple reason: $.


There's also another player that followed Amazon's path, retailer to cloud host. Lidl, the supermarket, is now a serious alternative to AWS. https://stackit.com/en/news/stackit-becomes-the-dutch-government-s-official-cloud-alternative

Not cheap at all, but possibly a better service than the three mentioned above; those three are all in the bargain, value-for-money tier, not the premium, value-added tier of hosters.

There's a decent argument to be made that unless you absolutely, positively need a proprietary service like Spanner, and there's no reasonable design-level accommodation that lets you do without it at reasonable cost, you should apply a bit of proper supply-chain/vendor management and try not to build your business on something that can only be bought from a single vendor.

You should always try to commoditize your inputs (and, unless your output is already commoditized, try to prevent being commoditized yourself; which of course means your supplier will make you work for it, because just like you they find commoditized inputs great but think far less highly of being commoditized in turn).

Most companies' AWS workloads ought to be commoditizable. This is where investing a bit of money in good systems-design (pardon me, software architecture) folks can save you a lot of money in the long term; but so far many companies don't look at it that way, and focus more on the cost of creating software than on the cost of operating it.

If you've done that, you should have a selection of quite a lot of different providers.

If you haven't done that, well. Amazon as a marketplace runs on 'your margin is my business opportunity'. There's a corollary to that: a lot of their margin is a tax levied on those who make short-sighted design decisions in their IT systems.
 
  • Like
Reactions: Pino90

MilleniX

Ars Tribunus Angusticlavius
7,839
Subscriptor++
No one starts out at AWS scale. Even AWS was just a bunch of VMs at the beginning.
AWS absolutely started at "AWS scale". It came about because Amazon operated infrastructure scaled to serve the holiday-season crush with the usual levels of reliability, throughput, and latency. This meant that most of the year, they had several times the necessary capacity to operate their global storefront. That's the capacity they were renting out - multiples of Amazon.com's year-round average footprint. It was not just spooling up some extra servers on the margins, it was monetizing the otherwise massively-underutilized capital assets and operating expenses.
 

w00key

Ars Tribunus Angusticlavius
8,982
Subscriptor
AWS absolutely started at "AWS scale". It came about because Amazon operated infrastructure scaled to serve the holiday-season crush with the usual levels of reliability, throughput, and latency. This meant that most of the year, they had several times the necessary capacity to operate their global storefront. That's the capacity they were renting out - multiples of Amazon.com's year-round average footprint. It was not just spooling up some extra servers on the margins, it was monetizing the otherwise massively-underutilized capital assets and operating expenses.

AWS never released official numbers, but EC2 started with m1.small instances only and is estimated to have had a few thousand physical hosts.


That's what I mean by not everyone needing to start at "current AWS" scale. AWS itself didn't start out huge. You don't need to launch with a million VMs and tons of regions; just Europe, with one AZ in each of Frankfurt, Amsterdam and Paris, is enough, and there is plenty of dark fiber between those locations.


OVH runs over half a million physical servers now, and countless VMs in their cloud product. They are already way past "early AWS" in size.
 

Technarch

Ars Legatus Legionis
15,335
Subscriptor
OVH runs over half a million physical servers now, and countless VMs in their cloud product. They are already way past "early AWS" in size.

True, but now it's about manageability features as well as size. If you don't mind vendor lock-in, you'd be way better off running your model on a SageMaker endpoint. If you do mind vendor lock-in, then you'd want to containerize your model, but even then you'd want the management features of GKE or EKS. I'm sure OVH isn't just offering pure VMs and nothing else as if we were still cavemen, but there's more to cloud than just scale.
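The lock-in-averse path really is that simple, though: "containerize your model" mostly boils down to hiding inference behind a plain HTTP endpoint. A minimal stdlib-only Python sketch; the model here is a stand-in stub, not a real inference call, and every name in it is illustrative:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

def predict(text: str) -> dict:
    # Stand-in for real model inference (a llama.cpp, vLLM, or similar call).
    return {"input": text, "tokens": len(text.split())}

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body, run the "model", return JSON.
        body = self.rfile.read(int(self.headers["Content-Length"]))
        payload = json.dumps(predict(json.loads(body)["text"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # keep demo output quiet

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick any free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    req = Request(f"http://127.0.0.1:{server.server_port}/predict",
                  data=json.dumps({"text": "hello cloud"}).encode(),
                  headers={"Content-Type": "application/json"})
    print(json.loads(urlopen(req).read())["tokens"])  # → 2
    server.shutdown()
```

Put that behind any container base image and it schedules the same on GKE, EKS, or a European hoster's managed k8s; nothing in it is vendor-specific, which is the whole point.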
 

w00key

Ars Tribunus Angusticlavius
8,982
Subscriptor
True, but now it's about manageability features as well as size. If you don't mind vendor lock-in, you'd be way better off running your model on a SageMaker endpoint. If you do mind vendor lock-in, then you'd want to containerize your model, but even then you'd want the management features of GKE or EKS. I'm sure OVH isn't just offering pure VMs and nothing else as if we were still cavemen, but there's more to cloud than just scale.

They offer the standard set of services: managed k8s, object store, virtual networks, etc. It's no longer "here is your public IP and root login, glhf".

And also MySQL, PostgreSQL, MongoDB, Valkey, Kafka and ClickHouse, if you don't want to deploy your own DB instances. apt install postgresql is easy, but HA and failover are harder, and that's the part you pay them for. They document all the failure scenarios; for the worst case, where all nodes die (primary + replicas), you get:

RPO: approx. 5 minutes or 1 WAL file. RTO: multiple hours (time to restore your backup)

Which is about as good as you can get when the primary and both read replicas all die at the same time.
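For what it's worth, that "approx. 5 minutes or 1 WAL file" bound maps directly onto standard PostgreSQL WAL-archiving settings; a sketch of the kind of configuration behind it (values illustrative, not OVH's actual setup):

```ini
# postgresql.conf (illustrative values, not OVH's real config)
archive_mode = on
archive_command = 'cp %p /backup/wal/%f'  # ship each completed WAL segment off-host
archive_timeout = 300                     # force a segment switch at least every 5 minutes
```

With a completed segment archived at least every 300 seconds, losing the primary and every replica at once costs at most one WAL file, i.e. roughly five minutes of writes: exactly the quoted RPO.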


AWS and Google Cloud do offer a marketplace with paid, licensed images, and AFAIK others often don't. When you need a Juniper vMX virtual router or a Cisco ASAv to test against, it's better to just rent it for a day on GCP/AWS than to jump through hoops purchasing a license ($$$) and maintenance for it.


That's the last remaining gap for me in practice between the big 3 and the rest; they pretty much reached feature parity on the simpler services a long time ago.
 
  • Like
Reactions: Technarch

Skoop

Ars Legatus Legionis
33,302
Moderator
None of these posts about how the various services are currently set up and operate include or imply any view on whether they, as AI, will pop in a bubble.

It's interesting and informative, but it's really pretty general discussion. I suspect that a lot of this would fit better in the programming discussion in the other forum; it seems to be sliding off of the specific topic of this thread.

tl;dr: it is more about computing and software than it is about BR business.
 

Xenocrates

Ars Tribunus Militum
2,485
Subscriptor++
Recent trends: everything from Claude Code to 3D model generators is asking for more credits or making requests more expensive, everything with subscription access is raising prices or removing features, and some services, like Sora, are shutting down because of how badly they bleed cash.

Given all that, can we please admit the claims that inference costs would come down, and that these companies would be viable in the market if only they stopped doing R&D or expanding, are likely spurious?

Hell, I've even seen some LinkedIn-isms claiming people are hiring juniors again because token costs for simple tasks were becoming absurd; not that I trust those any more than I trust the LinkedIn garbage about AI making everyone a 10x developer. But it shows a stark shift in attitude from the "compute too cheap to meter" slop that accompanied the initial subscription and free-tier releases.
 

hanser

Ars Legatus Legionis
43,061
Subscriptor++
I think it's a sign that demand is outstripping supply, so of course prices will go up while supply adjusts, and that adjustment has very long lags, because making CPUs and GPUs is the most complex, capital-intensive manufacturing process the human race has ever created.

Long-term, I think costs will trend down like every other technological marvel that exists in mostly-free markets. Over the next 1-3 years, I think prices will trend up; on the 3-5 year horizon, they will trend down, barring some exogenous macro event occurring.
 

Xenocrates

Ars Tribunus Militum
2,485
Subscriptor++
With OpenAI having to ask for government guarantees on their buildout, and a ton of other people missing financing targets and buildout timelines, I don't think there is a long term, at least not at anything like this scale, especially with energy markets blown up by -INSERT SOAP BOX RANT-. Energy is getting more expensive, compute per watt has already plateaued, and the gains AI is getting come through either algorithmic improvements or bespoke, low-precision hardware that can't meaningfully be used for anything except the slop producers.

Retail customer sentiment is turning against it. So, while I think there will be some enterprise market, I think the loss-leaders used to push AI everywhere actually ended up being a foot-gun: the market leaders oversaturated the space, poisoned the well for their successors, and massively over-promised in most verticals. AI art is now a sign of cheap tat and scams, the same way render-only HW projects were not too long ago. AI news is fake bullshit that tells you not to trust anything from the source. AI-created apps are getting a reputation for being made by extremely shady devs, like the NP++ debacle.

AI is the new asbestos. A "wonder" that turns into a horror story once it's time to clean up, but no one in the industry wants to admit responsibility for the cleanup costs, and the few legitimate uses of it will need to be sharply constrained by guardrails because of how irresponsibly it was pushed.
 

w00key

Ars Tribunus Angusticlavius
8,982
Subscriptor
INSERT SOAP BOX RANT
Pretty much the whole reply was that lol.

---

Can we please admit the claims that inference costs would come down, and that these companies would be viable in the market if only they stopped doing R&D or expanding, are likely spurious?

Two words: Pareto frontier.

If you kept up with the releases, you would have seen that Gemma 4 31B scores higher in general chat Q&A than Opus 4.1, a huge and expensive model, but obviously an older one than a model freshly zipped up and offered as a download.

So at constant performance, yeah, inference cost is dropping crazy fast: you have a model that scores higher than Opus did ~10 months ago, and you can download it and run it on a Mac with enough memory, or serve many concurrent requests from an old A100/H100 card.
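The Pareto-frontier point is easy to make concrete: plot each model as (price, score) and keep only the models that no other model beats on both axes at once. A quick Python sketch; every name and number below is invented for illustration, not a real price or benchmark:

```python
# Each entry: (name, price per 1M tokens in $, benchmark score).
# All names and numbers are made up for illustration.
models = [
    ("big-flagship-2024", 15.00, 78),
    ("big-flagship-2025", 20.00, 88),
    ("mid-distilled",      1.50, 80),
    ("small-open-31b",     0.40, 79),
    ("tiny-edge-4b",       0.05, 60),
]

def pareto_frontier(models):
    """Keep models not dominated by any other model on BOTH price and score."""
    frontier = []
    for name, price, score in models:
        dominated = any(p <= price and s >= score and (p, s) != (price, score)
                        for _, p, s in models)
        if not dominated:
            frontier.append(name)
    return frontier

print(pareto_frontier(models))
# → ['big-flagship-2025', 'mid-distilled', 'small-open-31b', 'tiny-edge-4b']
```

In these terms the thread's claim is that each new distilled release pushes the frontier down and to the right, stranding last year's flagship in the dominated region, the way "big-flagship-2024" is stranded above.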


One day people will stop chasing the latest and greatest. GPT 5.5 / Opus 4.7 / Mythos is expensive, yes, but the next Gemini Flash, Claude Sonnet or GPT mini will be distilled from these and be nearly as capable at a fraction of the price.


DeepSeek 4 also debuted new tech that reduces long-context memory usage by a ton, offering 1M context length as a first for open models, and people already have it working in their open-source runtimes.
 
  • Like
Reactions: Pino90

hanser

Ars Legatus Legionis
43,061
Subscriptor++
^^ I think that’s definitely true. I’ve been spending most of my spare energy trying to move/adjust our tooling layer to be usable by Claude Desktop on Windows, but I 100% think that our non-tech people will be satisfied with a Sonnet-level model for the overwhelming majority of their doing-work activities. I know Sonnet handles almost all of my administrivia (email, documentation, etc) at this point with very little needed in the way of steering/adjusting.

Which means in 6-12 months the non-dev folks will be able to get away with Kimi, Mistral, etc OSS models. The tooling surrounding those models may or may not be there tho. Right now Claude Desktop supports routing to OSS models, but I could see them removing that, once compute is less constrained. And I don’t think any of the OSS companies are going to make the inroads into Office the way Anthropic has. So they’ll keep the enterprise market.
 
Retail customer sentiment is turning against it.
Backlash against LLM AI is a very real risk. AI slop, AI hallucinations abound in every AI result. Will it always be this way? Very possibly. As execs continue to believe the hype and cram AI down everyone's throat, we will see rising resentment. Can the companies deliver "good enough" results before the hype backlash replaces AI mania? Stay tuned, folks!
 

sakete

Ars Scholae Palatinae
1,059
Subscriptor++
Backlash against LLM AI is a very real risk. AI slop, AI hallucinations abound in every AI result. Will it always be this way? Very possibly. As execs continue to believe the hype and cram AI down everyone's throat, we will see rising resentment. Can the companies deliver "good enough" results before the hype backlash replaces AI mania? Stay tuned, folks!
Social media is littered with AI slop these days and it's very easily recognizable. I'm certainly completely over AI-generated content.

For real productivity work it's useful, but it's being overused currently, and sentiment-wise, as you pointed out, the pendulum is starting to swing back.
 
Everyone said the internet (but they meant the WWW) would dramatically transform everything, but especially commerce, and it did... eventually....
An AI bubble pop and ongoing AI use are not mutually exclusive. Hanser's use alone will probably prop up Anthropic... :cool:
 

Exordium01

Ars Praefectus
4,323
Subscriptor
Everyone said the internet (but they meant the WWW) would dramatically transform everything, but especially commerce, and it did... eventually....
Saying that there’s a bubble is not the same as saying that there is no value in LLMs but it has been interpreted that way in this thread.

When you have Sam Altman out there stumping for the socialization of the financial risks in their business plans, and the disgraced, newly ex-CEO of Intel lighting hundreds of millions of dollars in cash on fire to assess "the Christian values" of LLMs, I'm not really sure how you can claim that there isn't a bubble.
 

flere-imsaho

Ars Tribunus Angusticlavius
9,933
Subscriptor
I’m not really sure how you can claim that there isn’t a bubble.

Where am I claiming that there isn't a bubble?

My point was that even though the dot-com boom popped, the internet did eventually transform a lot of stuff. I suspect AI (even with a bubble pop) will be similar.
 

Ecmaster76

Ars Legatus Legionis
17,067
Subscriptor
Banks seem to be signaling a lack of confidence
Banks seek to offload risk to avoid ‘choking’ on data centre debt (FT, paywalled)
https://archive.is/GoRyM
Lenders are exploring private deals to sell stakes in the debt as well as so-called risk transfers to reduce exposure to big borrowers and free up capacity for more lending.
The efforts showcase the unprecedented scale of borrowing that underpins the AI sector and the pressure it is putting on lenders. Oracle and CoreWeave, two data centre operators, have borrowed hundreds of billions to build sites across the US for AI labs.
“The sizes we’re talking about . . . they’re out of scale to anything we’ve thought about, ever,” said Matthew Moniot, co-head of credit risk sharing at Man Group. “Banks very quickly start choking.”
Lenders, including JPMorgan and MUFG, have spent more than six months distributing $38bn of construction debt tied to a data centre project leased to Oracle in Texas and Wisconsin, people familiar with the matter said.
Some banks sought to sell the loans at a discount to non-bank lenders to offload the Oracle-linked debt, the people said.

Granted, a lot of that seems pointed at Oracle in particular, but it seems like they are realizing a broader overexposure to a single, highly speculative industry.
 
  • Like
Reactions: AndrewZ

Exordium01

Ars Praefectus
4,323
Subscriptor
Where am I claiming that there isn't a bubble?

My point was that even though the dot-com boom popped, the internet did eventually transform a lot of stuff. I suspect AI (even with a bubble pop) will be similar.
Apologies. It wasn’t meant as a specific you; I know you weren’t. It was a general you, for the people participating in this thread who claim there is no bubble. "One" would have been a better word to use than "you".
 
  • Like
Reactions: flere-imsaho

w00key

Ars Tribunus Angusticlavius
8,982
Subscriptor
Yep, whoever financed this round of crazy buildout is probably safe, but the next round is questionable. There are only so many tokens you can sell; we'll possibly run out of command-line tools to build before the next batch of datacenters is done and hooked up to the grid.

SpaceX leasing half of their GPUs to Anthropic is hilarious; demand for Grok must be crazy low, and they can't even queue up enough training and research jobs to fill the cluster.


Next year may be worse. The next medium model will be as good as current big ones, and suddenly you'll only need half the GPUs to serve the same traffic. Repeat that a few more times and we might get properly smart on-device or in-home AI.
 

w00key

Ars Tribunus Angusticlavius
8,982
Subscriptor
Will that mean we can afford a consumer GPU at that point? Asking for a friend.
Nope. It will take another 2 or 3 years.

Apple has the right idea: inference at the edge, on-device, privacy first. But they tried before the models were ready.


Gemma 4 E4B could do it now, and at 4-bit precision that's only ~2 GB when phones have up to 16 GB available. In a year or two it will be silly powerful, and we might not need those oversized models anymore for most things; I mean, how hard is it to find my appointments for the week, or skim my inbox for anything important? You don't need a "graduate level math" model for that.
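The back-of-the-envelope memory math checks out: 4-bit weights cost half a byte per parameter. A quick sketch, with the parameter count and overhead factor as rough illustrative guesses, not official figures:

```python
def model_memory_gb(params_billion: float, bits_per_weight: int,
                    overhead_frac: float = 0.2) -> float:
    """Rough memory footprint of a quantized model's weights.

    overhead_frac is a hand-wavy allowance for KV cache, activations
    and runtime buffers; real usage varies with context length.
    """
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * (1 + overhead_frac) / 1e9

# A ~4B-parameter model at 4-bit fits comfortably on a 16 GB phone;
# the same model at fp16 would be much tighter.
print(round(model_memory_gb(4, 4), 1))   # → 2.4
print(round(model_memory_gb(4, 16), 1))  # → 9.6
```

Same arithmetic in reverse explains why the oversized frontier models stay in datacenters: hundreds of billions of parameters put even 4-bit weights far past any phone's memory.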
 
  • Like
Reactions: hanser

sakete

Ars Scholae Palatinae
1,059
Subscriptor++
Yep, whoever financed this round of crazy buildout is probably safe, but the next round is questionable. There are only so many tokens you can sell; we'll possibly run out of command-line tools to build before the next batch of datacenters is done and hooked up to the grid.

SpaceX leasing half of their GPUs to Anthropic is hilarious; demand for Grok must be crazy low, and they can't even queue up enough training and research jobs to fill the cluster.


Next year may be worse. The next medium model will be as good as current big ones, and suddenly you'll only need half the GPUs to serve the same traffic. Repeat that a few more times and we might get properly smart on-device or in-home AI.
Kinda like Moore’s Law - we’ll call it W00key’s Law.
 

Exordium01

Ars Praefectus
4,323
Subscriptor
Yep, whoever financed this round of crazy buildout is probably safe, but the next round is questionable. There are only so many tokens you can sell; we'll possibly run out of command-line tools to build before the next batch of datacenters is done and hooked up to the grid.
This is where I disagree. I think Oracle and OpenAI have written checks they can’t cash this round.

Everybody else will be able to weather it. NVidia and Microsoft will look particularly bad when their “investments” evaporate and revenue dips proportionally to their circular spending. Everyone else will see PE corrections but that doesn’t impact their businesses.
 

w00key

Ars Tribunus Angusticlavius
8,982
Subscriptor
This is where I disagree. I think Oracle and OpenAI have written checks they can’t cash this round.

Everybody else will be able to weather it. NVidia and Microsoft will look particularly bad when their “investments” evaporate and revenue dips proportionally to their circular spending. Everyone else will see PE corrections but that doesn’t impact their businesses.
Most of OpenAI's plans, though, are nothing more than paperwork, not firm commitments. But yes, of all the firms, they are the most yolo.

Oracle, though, seems stretched, same with CoreWeave; these two could be the start of the deflation of the bubble.

Microsoft will be fine; they also own a slice of Anthropic, I think, a $5B investment in Nov '25. And Anthropic will rent whatever capacity Azure has for the next few years.
 

Vince-RA

Ars Praefectus
5,324
Subscriptor++
I think Oracle and OpenAI have written checks they can’t cash this round
Funny, I was discussing the prospect of this with a friend over lunch this week. Oracle collapses under the weight of its crazy debt-fueled aspirations, Broadcom buys the dregs, and then becomes The Shittiest Company That Has Ever Existed(tm)!
 

Auguste_Fivaz

Ars Praefectus
5,868
Subscriptor++
Funny, I was discussing the prospect of this with a friend over lunch this week. Oracle collapses under the weight of its crazy debt-fueled aspirations, Broadcom buys the dregs, and then becomes The Shittiest Company That Has Ever Existed(tm)!
This new company would be a Wall Street darling for a while. Buy low ...
 

Shavano

Ars Legatus Legionis
69,071
Subscriptor
AI is the new asbestos. A "wonder" that turns into a horror story once it's time to clean up, but no one in the industry wants to admit responsibility for the cleanup costs, and the few legitimate uses of it will need to be sharply constrained by guardrails because of how irresponsibly it was pushed.
What makes you think anyone has any interest in cleaning up the mess?
 
  • Like
Reactions: concernUrsus
^^ I think that’s definitely true. I’ve been spending most of my spare energy trying to move/adjust our tooling layer to be usable by Claude Desktop on Windows, but I 100% think that our non-tech people will be satisfied with a Sonnet-level model for the overwhelming majority of their doing-work activities. I know Sonnet handles almost all of my administrivia (email, documentation, etc) at this point with very little needed in the way of steering/adjusting.

Which means in 6-12 months the non-dev folks will be able to get away with Kimi, Mistral, etc OSS models. The tooling surrounding those models may or may not be there tho. Right now Claude Desktop supports routing to OSS models, but I could see them removing that, once compute is less constrained. And I don’t think any of the OSS companies are going to make the inroads into Office the way Anthropic has. So they’ll keep the enterprise market.

Putting together the right mix of models to automate your tasks is real.

AWS Bedrock can handle almost everything but they don't have OpenAI support, so I have to wire them in manually.

GPT 5.4 Mini is better than Sonnet, and Opus is crazy expensive.

Backlash against LLM AI is a very real risk. AI slop, AI hallucinations abound in every AI result. Will it always be this way? Very possibly. As execs continue to believe the hype and cram AI down everyone's throat, we will see rising resentment. Can the companies deliver "good enough" results before the hype backlash replaces AI mania? Stay tuned, folks!

There's an entire generation of computer users being raised right now that is LLM-first: before Google, before reading, before any other kind of computer interaction.


We are the last pre-AI generation. After this, it will be Star Trek's computer on your communicator for every single one. Then, after that, the Orange Catholic Bible.
 

NervousEnergy

Ars Legatus Legionis
11,507
Subscriptor
We are the last pre-AI generation. After this, it will be Star Trek's computer on your communicator for every single one. Then, after that, the Orange Catholic Bible.
Thou shalt not make a machine in the likeness of a human mind. We need mentats, though, to make it stick.

Backlash against LLM AI is a very real risk. AI slop, AI hallucinations abound in every AI result. Will it always be this way? Very possibly. As execs continue to believe the hype and cram AI down everyone's throat, we will see rising resentment. Can the companies deliver "good enough" results before the hype backlash replaces AI mania? Stay tuned, folks!
The question (IMO) is speed of evolution, and the fairly large amount of low-productivity deadwood in companies today. I'll speak from some recent experience: there is a function in my department (a very large company, over 100K employees) that sends out questionnaires every year to application owners, vets the responses, and does some metrics around the results. Other functions take the database of those responses and run control attestations on them, and do metrics around those results. Still other functions audit those various metrics.

As a relative newcomer to the company, it's obvious to me that every person in every one of those functions could easily be architected out of a job by an AI. These aren't difficult tasks, they're structured and repetitive, and they don't have to be done perfectly to still be done better than the (still error prone) human analysts. Collectively, these folks probably make $1MM per year in salary, plus a 30% uplift for benefits. And this is one part of one department - less than a dozen people.

There used to be a popular T-shirt for sale some years ago that I wish I'd picked up; it said on the back, "Go away or I will replace you with a very small shell script."

As far as the AI bubble popping goes: I've been on Warren Buffett's often-cited strategy for two decades, constantly putting as much as you can into a simple, low-cost S&P 500 index fund, as the vast majority of 'active' investment managers consistently fail to beat the general market in returns. The run-up in the S&P, though, has me second-guessing whether I shouldn't get out and go money market. OTOH, I've stayed in without a single change through every market 'crisis' going back to 2008 (I was just starting to invest during the dot-com bust, so it didn't affect me much), and regardless of how much I wished I'd 'gotten out' in retrospect after every crash, it's always come back stronger than ever. Conditions are scary, though. AI unsustainability, along with the current energy shock and inflation, seems to set the stage for a major correction.
 
  • Like
Reactions: hanser