> Wouldn't that require a tremendous amount of investment in telecom infrastructure to handle everyone streaming everything all the time? (Not to mention people stuck on rural DSL or satellite with limited bandwidth and/or terrible latency.)

ISPs would LOVE the inexhaustible slush fund they'd be able to charge to build it. So you're only further cementing the reasons why they'd want this....
If this RAM cartel nonsense keeps up, hopefully someone will come up with a way to import grey-market Chinese RAM once they get some more fabs spun up.
> ISPs would LOVE the inexhaustible slush fund they'd be able to charge to build it. So you're only further cementing reasons why they'd want this....

I'd bet good money they still won't build it though.
> Gotta admit, the ultimate use of AI for me would be better voice-to-computer interfaces. There's a few other uses, but that's the main one.

Yeah, a VUI (instead of a GUI). I've been learning Proxmox, which has meant: google the command, enter the command, cross fingers. (It's called a homelab because it's an experiment.)
They seem to get used for everything but that.
> You could just use a Linux Distro for that, I am running Linux on a 17 year old machine with just 4 GB of ram and it works fine for what I use it for.

Linux Ubuntu on a 6 year old machine with 64 gigs.* Windows 7 Pro for a very long time, then Linux for me. Tried to like 10 and 11, but nope. AI diligently filtered out, but it's like crabgrass: always some somewhere.
> When Robert R. Taylor bought the entire supply of hand pumps it was not because he needed that much supply for his product, it was in order to maintain exclusivity in the liquid soap market. He knew that if he bought the entire supply, his competitors would be hamstrung getting competing products to market, giving him an advantage.
>
> I sincerely doubt that OpenAI needs the 40% of global DRAM wafers (not chips, packages, modules, or cards) that it has bought.

AFAIK they didn't buy; they signed letters of intent, basically stating "we want to do this sometime in the future at this price." OpenAI doesn't have the billions needed to buy 900k wafers a month. The rest is the market panicking.
> I'd bet good money they still won't build it though.

Sure. Just as slowly and expensively as they can possibly make it. I mean, why charge once when you can just keep charging to build the same infrastructure over and over? Well, I mean, eventually they deliver, but only enough to keep the money flowing without ever realistically finishing.
Apple still uses external DRAM suppliers (Samsung, SK Hynix and Micron). The DRAM dies are stacked onto the Apple silicon during manufacture.
So Apple is as vulnerable as anyone else. However, it's possible they have long-term supply arrangements in place that could insulate them from the immediate pricing shock.
The increased costs have implications for the concept of “AI PCs,” a term OEMs have used consistently over the last two years as they sought to leverage the growth of generative AI chatbots to drive computer sales.
Shoppers, however, have been either too reluctant or too savvy to buy into manufacturer-made AI PC hype.
“PC OEMs had trouble selling the on-device AI message even before the memory shortages,” Ubrani said.
::wiggles hands:: Computers pretty much reached a level of performance where you need a stopwatch to tell the difference years ago. Side by side with a modern system, sure, there are often very noticeable differences.
> Yes. Yes it is.
>
> Why would it be a good thing? It's a marketing fad based on software that won't exist in a few years. Remember Cortana?
>
> It's a waste of space on the silicon. You could replace that stuff with more cache.

Came here to say exactly this; it is most definitely a bad thing.
Satya Nadella had sent an email to engineering heads expressing disappointment with the consumer version of Copilot. He reportedly wrote that tools for connecting Copilot with Outlook and Gmail “for [the] most part don’t really work” and are “not smart.”
> My Copilot experiments have been unimpressive.

Copilot is not running locally on your PC. ("AI PCs" were all about local inference, which could have been a good thing, but was of course terribly implemented.)
> I keep testing LLMs on areas that might be considered "tricky" where I'm an actual expert.
>
> I'm seeing if they get the answers right, or are very confidently wrong and try to gaslight me on this.
>
> So far I'm seeing about a 25% failure rate, as of a month ago.

As this is a post about AI PCs, I assume you're talking about local models. It's awesome that you're trying out local LLMs, but most "AI PCs" can't run anything but the smallest local models. These have very limited use cases, and certainly can't compete in the world-knowledge stakes.
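The expert-grading workflow described above boils down to a simple tally. A minimal sketch, where the questions, verdicts, and the resulting percentage are illustrative placeholders rather than the commenter's actual data:

```python
# Sketch of the grading workflow: tally how often a model's answer was
# judged wrong by a human expert. The graded list below is hypothetical.

def failure_rate(verdicts):
    """verdicts: list of booleans, True = expert judged the answer correct."""
    if not verdicts:
        return 0.0
    failures = sum(1 for ok in verdicts if not ok)
    return failures / len(verdicts)

# Hypothetical grading session: 3 correct answers, 1 confidently wrong one.
graded = [True, True, False, True]
print(f"failure rate: {failure_rate(graded):.0%}")  # -> failure rate: 25%
```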
> You could just use a Linux Distro for that, I am running Linux on a 17 year old machine with just 4 GB of ram and it works fine for what I use it for.

I'm guessing you don't have many tabs open? The amount of memory the OS uses these days is a minority compared to how much a web browser needs. Switching from Windows to Mint might save you a few hundred MB of RAM; opening one modern web page will eat all that RAM back up.
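You can check the OS-vs-browser split yourself on a Linux box by reading `/proc/meminfo`. A minimal sketch (Linux-only, since it depends on procfs); `MemAvailable` counts reclaimable page cache, so it is the honest answer to "how much RAM do I really have left?":

```python
# Quick check of actual memory headroom on Linux via /proc/meminfo.

def meminfo():
    """Parse /proc/meminfo into a dict of field -> kibibytes."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])  # values are reported in kB
    return info

m = meminfo()
for field in ("MemTotal", "MemFree", "MemAvailable"):
    print(f"{field}: {m[field] / 1024:.0f} MiB")
```

Run it before and after opening a heavy web page and compare `MemAvailable`.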
> When Robert R. Taylor bought the entire supply of hand pumps it was not because he needed that much supply for his product, it was in order to maintain exclusivity in the liquid soap market. He knew that if he bought the entire supply, his competitors would be hamstrung getting competing products to market, giving him an advantage.
>
> I sincerely doubt that OpenAI needs the 40% of global DRAM wafers (not chips, packages, modules, or cards) that it has bought.

"Bought."
> What amazes me is that OpenAI is a money furnace that has been burning capital like there's no tomorrow and yet somehow SK Hynix, Samsung, and Micron seem to think that they're good for the money. Even companies that are building data centers are starting to question whether OpenAI will be able to pay when the bill comes.

Hypothetically, if they're selling 40% of their capacity to a major customer who never pays them a cent, but the other 60% goes to a market with prices 500% higher than last year for the exact same product... might they still come out ahead?
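The arithmetic in that hypothetical does work out. A back-of-the-envelope sketch, assuming last year's price as the baseline and reading "500% higher" as 6x the old price (all figures here are the comment's hypothetical, not real market data):

```python
# Back-of-the-envelope check: 40% of capacity goes unpaid, the remaining
# 60% sells at 6x last year's price. Baseline: 100% of capacity at price 1.0.
baseline_revenue = 1.00 * 1.0                  # whole capacity, last year's price
this_year_revenue = 0.40 * 0.0 + 0.60 * 6.0    # unpaid 40% + repriced 60%
print(this_year_revenue / baseline_revenue)    # -> 3.6
```

So even writing off the unpaid 40% entirely, revenue would be 3.6x last year's under those assumptions.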
> That's only an issue on Windows, Linux can use as much memory as you're willing to give it.

Now you'll find your machine is more powerful than you need.
Source? From my understanding they're fine for this quarter but next quarter could see them hit with large price hikes. They're still in negotiations for 2026 orders: https://www.digitimes.com/news/a20260107VL211/apple-dram-2026-nand-production.html
> Yes. Yes it is.
>
> Why would it be a good thing? It's a marketing fad based on software that won't exist in a few years. Remember Cortana?
>
> It's a waste of space on the silicon. You could replace that stuff with more cache.

Clippy popping up and reminding you to run the update for your PC's BIOS and Windows isn't a bad thing. It's bad when "we didn't ask for this!" And if Cortana became the "Cortana" of Halo's Master Chief, well, ahem... I would love that as an AI. Until the part where her memory corrupted....
> I'm sure there are 'think tanks' working out how to dismantle the personal computer market, to replace it with 'computing as a service' cloud systems, so customers own nothing. Ever. It's what big software and game publishers have wanted all along. In addition to massively increasing the barrier to entry for the software/game development business, allowing the existing big players to more easily monopolize the market. At every end of this push, the 'big players' win, and everyone else loses, which is precisely what makes that the outcome those big players will definitely want to try.

They are getting there. The last place I worked pretty much had their entire infrastructure running in Azure. They claimed it was cheaper than running everything on-prem. Riiiiiggghhhtt.
> Linux Ubuntu on a 6 year old machine with 64 gigs.* Windows 7 Pro for a very long time then Linux for me. Tried to like 10 and 11 but, nope. AI diligently filtered out but it's like crabgrass, always some somewhere.
>
> *Slower ram was cheap at the time and COVID was getting underway, so wanted to experiment.

Ditto.
> New normal for consumer PC: 8GB RAM (was getting to 16).
>
> In corporations, 32GB was already pretty much standard for developers etc., but we are probably getting back to 16GB, right?

I think there is significant hype around the need for RAM in (most) consumer PCs; most of what I hear would have you believe that 32GB is the bare minimum for Windows, and that 64 is okay for now but not future-proof. I built a new PC in September and was curious about how that held up. RAM was relatively cheap then, but I still got just one 16GB stick, intending to get a second one later and test how much better things got. I never hit the ceiling, though, and since it was DDR5 I won't be getting that second stick this year, at least. Even getting a relatively performant CPU feels like a waste; I rarely hit more than 20% load while gaming. If you're actually in it for performance and money is a factor, RAM won't be the issue with pennypinching pre-builts (rather, an underpowered GPU will be). Or, as the article suggests, being able to buy a new budget/mid-range machine at all.
> The optimist in me wants RAM pricing to bring in a year of performance optimizations to get some bloat out of software. If systems are shipping with less RAM, it sure would be nice to see OSs, applications, and websites optimize for more efficiency.

Between this and the coming GPU apocalypse, game companies will definitely feel the heat to optimize, IMHO.
> It's not really a silver lining when the only thing that's changing is that 16GB of RAM is getting kicked back up to being 'premium'.

I bought a laptop with 32 GB of RAM in 2020. That was not the most you could get even then (though perhaps the most in a thin-and-light convertible like the one I bought). I certainly think 32 GB is the minimum for a professional software development machine, and has been for years.