Buy your RAM (and SSDs) now.

hansmuff

Ars Tribunus Angusticlavius
9,622
Subscriptor++
I'm not sure how that helps us little folk, though. Because even if that was the realistic scenario, prices would still rocket through the roof, and only the billionaires get to buy larger quantities.
What I took away from it is that nobody really wants to hoard so much that it becomes a liability, and the initial allocation in itself means little because details in those contracts may significantly reduce the actual number of wafers (or *RAM) said to be 'reserved'.
Short term, of course, yes, it's gonna suck, but it'll swing back.
 
Several of the bigger YouTubers have been building 265K systems lately because they are evidently at a really nice price point right now.
On the Intel side, Nova Lake is set to double the number of P cores, double the general purpose x86 registers to 32 through APX and bring full AVX512 on E cores. AMD is similarly looking strong next generation. Even if all this RAM stuff wasn't happening, this is possibly the least appealing time to upgrade a CPU in 20 years, like getting a "good" deal on a Pentium 4 right before the Core 2 Duo launched.
 

fitten

Ars Legatus Legionis
54,711
Subscriptor++
On the Intel side, Nova Lake is set to double the number of P cores, double the general purpose x86 registers to 32 through APX and bring full AVX512 on E cores. AMD is similarly looking strong next generation. Even if all this RAM stuff wasn't happening, this is possibly the least appealing time to upgrade a CPU in 20 years, like getting a "good" deal on a Pentium 4 right before the Core 2 Duo launched.

Yeah, I wouldn't upgrade anytime soon, either. There's also the new instructions (branching-related and three-operand instructions) in addition to the larger register file, but binaries have to be recompiled to take advantage of that... or use some JIT that can make use of them. Still... I've been eager to see APX for a good while.
 
  • Like
Reactions: continuum

Anonymous Chicken

Ars Tribunus Militum
1,865
Subscriptor
Even if all this RAM stuff wasn't happening, this is possibly the least appealing time to upgrade a CPU in 20 years, like getting a "good" deal on a Pentium 4 right before the Core 2 Duo launched.
Eh, well now P4 was legendarily bad. Also, seems to me that there just isn't a lot of immediate use for more CPU, for almost anyone. More IPC gets used to reduce observable latency if nothing else, but people still consider 6 cores viable for a new gaming PC. You know, if some game comes out that eats cores to simulate the world then I'll be all over it, but I don't see that game.
 

Ardax

Ars Legatus Legionis
19,735
Subscriptor
On the Intel side, Nova Lake is set to double the number of P cores, double the general purpose x86 registers to 32 through APX and bring full AVX512 on E cores. AMD is similarly looking strong next generation. Even if all this RAM stuff wasn't happening, this is possibly the least appealing time to upgrade a CPU in 20 years, like getting a "good" deal on a Pentium 4 right before the Core 2 Duo launched.
Man, why did I check this thread?

I'm literally waiting on UPS to deliver a new laptop. Mostly because my current one died, but still.

🫠 😂

I can laugh about it, and I'm sure I'll be fine with what I end up with.
 
AM5 will stand ready for Zen6 and Zen7. Intel platforms, on the other hand ...
Zen 6 for sure... but Zen 7 is entirely conjecture at this stage. I'd be more surprised if Zen 7 didn't jump to AM6.

With Zen 5 the X3D chips began showing performance improvements in general-purpose applications, something that wasn't there on Zen 4 X3D. It means the design is becoming memory-bottlenecked enough for it to show up in a wider array of workloads besides just games and/or scientific programs. Zen 6 is increasing the core count by 50%, which means more hungry cores to feed with data. Ergo, it's going to need a good memory controller improvement to keep everything fed with data.

So take it to the next step, AMD's going to need to pick from three options to increase the memory bandwidth again for Zen 7.... either add additional memory controllers, significantly boost the existing memory controller's performance capabilities at high cost (and hope it's sufficient & reliability isn't problematic for end users), or adopt DDR6. Two out of three of those are going to require a new socket.
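For a rough sense of scale, here's the back-of-the-envelope arithmetic behind that bandwidth worry. The DDR5-6000 speed and core counts below are illustrative assumptions, not AMD specs:

```python
# Peak desktop DDR5 bandwidth and the per-core share as core counts grow.
# Assumes a 128-bit (dual-channel) bus; numbers are illustrative only.
def bandwidth_gbs(mt_per_s: int, bus_bits: int = 128) -> float:
    """Peak bandwidth in GB/s: transfers/s times bytes per transfer."""
    return mt_per_s * (bus_bits / 8) / 1000

ddr5 = bandwidth_gbs(6000)  # 96 GB/s for DDR5-6000
print(f"DDR5-6000, dual channel: {ddr5:.0f} GB/s")
for cores in (16, 24):  # e.g. today's 16 cores vs. a 50% larger part
    print(f"  {cores} cores -> {ddr5 / cores:.1f} GB/s per core")
```

Going from 16 to 24 cores on the same bus cuts the per-core share from 6 to 4 GB/s, which is why a stronger memory subsystem has to accompany the core-count bump.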

I wonder if AMD will lean into that "drop in upgrade" aspect? It seems like RAM pricing will still be getting in the way of selling new Zen 6 systems, based on the timescales I've seen up to now.

Why wouldn't they? The three-generation, 6+ year single-platform support was a huge selling point for Zen 4. I built a 7700X system a month after the platform launched because of it. The current insanity with memory prices is a great example of why building a system once and tossing nearly all of it out per upgrade isn't a great idea. I am very much looking forward to dropping a 10-core Zen 6 X3D chip into my system next year. The sim rate uplift in Stellaris alone was beyond ludicrous over my 4790K, and after seeing what the X3D cache further adds to that in GN's tests, I'm very excited to get my grubby paws on Zen 6 X3D.
 

evan_s

Ars Tribunus Angusticlavius
7,314
Subscriptor
Zen 6 for sure... but Zen 7 is entirely conjecture at this stage. I'd be more surprised if Zen 7 didn't jump to AM6.

With Zen 5 the X3D chips began showing performance improvements in general-purpose applications, something that wasn't there on Zen 4 X3D. It means the design is becoming memory-bottlenecked enough for it to show up in a wider array of workloads besides just games and/or scientific programs. Zen 6 is increasing the core count by 50%, which means more hungry cores to feed with data. Ergo, it's going to need a good memory controller improvement to keep everything fed with data.

So take it to the next step, AMD's going to need to pick from three options to increase the memory bandwidth again for Zen 7.... either add additional memory controllers, significantly boost the existing memory controller's performance capabilities at high cost (and hope it's sufficient & reliability isn't problematic for end users), or adopt DDR6. Two out of three of those are going to require a new socket.

I think Zen 7 will depend largely on the timing of it and DDR7. I think AMD feels that AM5 was a little bit early and the early high costs of DDR5 held back the platform at launch. I don't think they want to repeat that with AM6.

I don't disagree with Zen 5 on desktop being held back somewhat by the I/O die and memory controller, but it's also the exact same I/O die as Zen 4. On servers that doesn't seem to be an issue, and server did get a new I/O die. There's also the possibility that Zen 6 or Zen 7 will switch how the I/O die is connected to the CPU dies to something more like Strix Halo, which could bring performance and latency improvements. They could also go for CUDIMM support, which is already showing the potential for ~50% more bandwidth over typical DDR5 speeds on current AM5 platforms. You could easily end up with an AM5+ socket or something along those lines: the Zen 7 chips would drop into a B650 board but wouldn't be guaranteed as high a memory clock speed or CUDIMM support, and would still be fine on the single-CCD chips. From what I understand the current IF links aren't even fast enough to fully max out the current memory bus on a single-CCD chip.

There are other possibilities too. They could do the "low end" single-CCD chips like normal and make all of the dual-CCD high-core-count chips single or double X3D cache chips. They could do it as a chip that spans sockets. They could make a single I/O die that supports both DDR5 and DDR6 for socket AM5 and AM6 respectively. It probably also wouldn't be too hard to make different I/O dies and combine them with the same CCDs to make AM5 and AM6 versions of the same chips. For those upgrading or on more of a budget: buy the AM5 version and drop it into your existing platform, or buy a cheaper platform that doesn't require more expensive RAM and pricier new motherboards. You might lose a little performance due to RAM speed, and you'd probably lose out on some I/O improvements, as AM6 would presumably support more/faster PCIe, USB, etc. along with just the RAM change.
 

fitten

Ars Legatus Legionis
54,711
Subscriptor++
I don't disagree with Zen 5 on desktop being held back somewhat by the I/O die and memory controller, but it's also the exact same I/O die as Zen 4.
Thus X3D parts. They show what is possible, I guess. As with the IMC, "honking big caches" are something you can do only once, really.... I guess unless the company makes a GB one or something sometime soon. So if you're rocking an X3D part, you're ameliorating that I/O die's memory inefficiencies.
 

evan_s

Ars Tribunus Angusticlavius
7,314
Subscriptor
Thus X3D parts. They show what is possible, I guess. As with the IMC, "honking big caches" are something you can do only once, really.... I guess unless the company makes a GB one or something sometime soon. So if you're rocking an X3D part, you're ameliorating that I/O die's memory inefficiencies.

Sure. You only get that XX% performance improvement over baseline once, because then you've set a new baseline. In the discussion of how you put potentially 32 Zen 7 cores in an AM5 socket and not end up completely memory-bandwidth starved, two X3D V-Cache dies are a pretty viable answer. What would potentially be unique there is that the dual X3D V-Cache configuration might be the only option for a 24- or 32-core chip, with no "regular" versions available for the larger dual-CCD CPUs. Further down the stack, especially with the binned chips, you could still have regular 8- or 10-core chips and not really have problems with memory bandwidth on the AM5 socket.
 
  • Like
Reactions: fitten
I think Zen 7 will depend largely on the timing of it and DDR7. I think AMD feels that AM5 was a little bit early and the early high costs of DDR5 held back the platform at launch. I don't think they want to repeat that with AM6.
For sure. And yes that's all very true, especially relative to DDR4 prices back then. AMD won't want to repeat it, but they also need to keep the cores fed properly if they want to keep progressing performance. Either way they have three (or I guess four) options to pick from to make it happen.
I don't disagree with Zen 5 on desktop being held back somewhat by the I/O die and memory controller, but it's also the exact same I/O die as Zen 4. On servers that doesn't seem to be an issue, and server did get a new I/O die. There's also the possibility that Zen 6 or Zen 7 will switch how the I/O die is connected to the CPU dies to something more like Strix Halo, which could bring performance and latency improvements. From what I understand the current IF links aren't even fast enough to fully max out the current memory bus on a single-CCD chip
That's my point, same exact IO die. Which is why reviewers were extremely surprised when Zen 4 X3D parts showed no perf gains in workstation programs, but the Zen 5 X3D parts did.

That's an excellent point; I forgot someone theorycrafted that it was the IF links and not the IMCs starving the cores. I haven't heard anything since, so for all I know it could be the IF links. Did anyone conduct testing on this? I know Strix Halo's direct-connection technology, which removed the need for IF, was credited with its incredible performance.

I would absolutely, totally love to see TSMC's InFO-oS direct-connection tech used on mainstream CPUs, but I imagine it adds direct costs plus the overhead of packaging defects ruining what were originally functioning parts. And TSMC may have limited packaging capacity for InFO-oS, just like TSMC's limited 3D packaging capacity was bottlenecking H100 production. So I would be very surprised if we actually did see it on mainstream chips. AMD could justify the costs in EPYC parts, though, if TSMC had the packaging volumes to handle it... As cool as Strix Halo is, the fact that it exists in fewer than five laptops nearly a year after its debut speaks volumes.

Feels like CUDIMMs will become de rigueur, at least at the high end. $deity knows how much they will cost in a year or two!

That's my biggest hope for DDR6... motherboard vendors seemed unwilling to jump to new tech within the DDR5 generation, but when DDR6 starts they might lead with CUDIMMs, or even CAMM2 modules with integrated clock drivers. I've heard of prototypes of both for DDR5, but I'd take either one at this point for next-gen DDR6.
 

IceStorm

Ars Legatus Legionis
26,114
Moderator
Samsung is exiting the SATA SSD market:


View: https://www.youtube.com/watch?v=qtQzR4ASkW8


They'll formally announce it in January. They're ending production over the next couple of years, but they will be halting sales to new customers without existing contracts. Samsung is pivoting to premium products for B2B sales: HBM, GDDR7, and NVMe drives.

Additionally, another source in retail was warned by an SSD company's rep that SATA drives would be harder to get in 2026.

If you need a SATA SSD, buy it now.

M.2 drives are more profitable. Not only can they command higher prices because the devices are faster, they're also far less complex: there's no shell, no connector, no screws.

I personally think SATA SSDs peaked when Micron's 4TB MX500 drives were going for $170 a couple years ago. I've never seen them that cheap since.

So when do we get stuff at decent prices? 2027-2028. That's the forecast: the AI models being trained now will be ready to run locally in a couple of years. We'll need local storage and memory to run them, so prices will have to come down. If they don't, no one will use them.

As far as memory goes, his sources are saying that memory prices will start to come down at the end of 2026. The peak is either now, or in 1-3 months. Once the panic buying subsides, and the hoarders realize they cannot sell for as much as they wanted, things will come down a bit, but the supply will start to recover in 9-18 months, not 3 years or so.
 
Last edited:
Eh, well now P4 was legendarily bad.

In retrospect, yes. But at launch it was more or less competitive:

[benchmark chart attachment]


Arrow Lake is a similar product. Competitive at launch but a little behind while also using more power. I suspect though it will be remembered similarly to the Pentium 4, at least if Zen 6 and Nova Lake deliver.
 
I think Zen 7 will depend largely on the timing of it and DDR7. I think AMD feels that AM5 was a little bit early and the early high costs of DDR5 held back the platform at launch. I don't think they want to repeat that with AM6.

I don't disagree with Zen 5 on desktop being held back somewhat by the I/O die and memory controller but it's also the same exact die I/O as Zen 4. On servers that doesn't seem to be an issue and it did get a new I/O die. There's also the possibility that Zen 6 or Zen 7 will switch how the I/O die is connected to the CPU dies to use something more like Strix Halo that could have performance and latency improvements. They could also go for CUDimm support which is already showing the potential for ~50% bandwidth over typical DDR5 speeds on current AM5 platforms. You could easily have a situation where you end up with an AM5+ socket or something along those lines. The Zen 7 chips will drop into a b650 board but won't be guaranteed to have as high a memory clock speed or maybe CUDimm support but still be fine on the single CCD chips. From what I understand the current IF links aren't even fast enough to fully max out the current memory bus on a single CCD chip.
Server is horribly bottlenecked by DDR5 and getting worse each generation. It has gotten so bad that Intel announced that they're not going to make an 8 channel Diamond Rapids part and will instead be using 16 channels of MRDIMMs (which each multiplex 128 bits of DDR5 into a single channel), which is insane. To control costs and keep core counts scaling, I expect AMD and Intel to both move to DDR6 as soon as physically possible. Since AMD (more so than Intel) keeps their desktop and server platforms in sync, I suspect that Zen 6 will be the last generation compatible with a DDR5 IO die. Or possibly Zen 7 launches with support for both DDR5 and DDR6 if prices and availability don't line up with when Epyc needs to jump to DDR6.
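To put rough numbers on that, here is the peak-bandwidth comparison of those two server configs. This assumes 64 bits of data per channel, and the figures are illustrative, not vendor specs:

```python
# Compare peak bandwidth of a 12-channel DDR5-6400 server socket with a
# 16-channel socket running MRDIMMs at DDR5-12800 rates.
def peak_gbs(channels: int, mt_per_s: int, channel_bits: int = 64) -> float:
    return channels * mt_per_s * (channel_bits / 8) / 1000

epyc = peak_gbs(12, 6400)      # ~614 GB/s
diamond = peak_gbs(16, 12800)  # ~1638 GB/s
print(f"12ch DDR5-6400:    {epyc:.0f} GB/s")
print(f"16ch MRDIMM-12800: {diamond:.0f} GB/s ({diamond / epyc:.1f}x)")
```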
 

evan_s

Ars Tribunus Angusticlavius
7,314
Subscriptor
Server is horribly bottlenecked by DDR5 and getting worse each generation. It has gotten so bad that Intel announced that they're not going to make an 8 channel Diamond Rapids part and will instead be using 16 channels of MRDIMMs (which each multiplex 128 bits of DDR5 into a single channel), which is insane. To control costs and keep core counts scaling, I expect AMD and Intel to both move to DDR6 as soon as physically possible. Since AMD (more so than Intel) keeps their desktop and server platforms in sync, I suspect that Zen 6 will be the last generation compatible with a DDR5 IO die. Or possibly Zen 7 launches with support for both DDR5 and DDR6 if prices and availability don't line up with when Epyc needs to jump to DDR6.

Yeah server is a different beast. IIRC Zen 5 server chips support higher DDR5 speeds stock already. It makes sense that with more and more cores being thrown in there memory bandwidth would be important.

AFAIK server and desktop have never shared I/O die so I could definitely see Zen 7 server going DDR6 right away while desktop stays on DDR5 with the different I/O dies handling most of that complexity so the CCDs stay the same and shared between desktop and server. Then again the rumor mill says they will have a lot more variety on Zen 7 CCDs.
 

Anonymous Chicken

Ars Tribunus Militum
1,865
Subscriptor
In retrospect, yes. But at launch it was more or less competitive:

[benchmark chart attachment]

Arrow Lake is a similar product. Competitive at launch but a little behind while also using more power. I suspect though it will be remembered similarly to the Pentium 4, at least if Zen 6 and Nova Lake deliver.
What, P4 (Prescott) was literally the point where the high-clock, low-IPC concept was shown to be a failure. IIRC a few things like Doom 3 were kind to it, probably because the code was hand-massaged by talented fingers. Arrow Lake is neither the starting point nor the ending point of the concepts in its design; it just seems to have... some issues with inter-tile communication, or something. It's not fundamentally bad. P4 was.
 

Demento

Ars Legatus Legionis
15,353
Subscriptor
I think I've already missed the boat on SATA SSDs. I thought I'd grab a pair of 2TB ones to replace the 3TB spinning rust in my NAS, but they're running quite a lot more expensive than even six months ago, and I'm not so impatient with my NAS access that I'm going to drop that much on it. Not sure if that's the RAM availability coming home to roost early, or just low availability of SATA devices now that NVMe is "normal".
 

fitten

Ars Legatus Legionis
54,711
Subscriptor++
I still have some traditional HDDs in my machines... just bulk storage that doesn't need to be super fast... stuff like music and videos. I also use one for scratch space on my Linux box... just a 4T partition to mess around in. The other thing I was worried about is that I have several low-end boxes (including my wife's gaming machine), and Windows/Nvidia is ending support for the 10x0 cards, so I got some 5050 replacements. I also have a 1050 in my Linux box... mostly just for a frame buffer, but I figured I'd upgrade it as well. Hopefully those will last another 10 years. 5050s are considered kinda crap, but considering they're replacing 1050 cards and a 1660... it's a pretty big upgrade.
 
What, P4 (Prescott) was literally the point where the high-clock, low-IPC concept was shown to be a failure. IIRC a few things like Doom 3 were kind to it, probably because the code was hand-massaged by talented fingers. Arrow Lake is neither the starting point nor the ending point of the concepts in its design; it just seems to have... some issues with inter-tile communication, or something. It's not fundamentally bad. P4 was.
Relative to the competition, Arrow Lake is probably worse than Prescott, which was actually quite competitive in its own day. You can argue that it was the point where a concept failed, but people don't buy CPUs for concepts; they buy them for performance.

The failed concept in Arrow Lake was Intel's disastrous outsourcing to TSMC, which has nearly destroyed the company.
 
Yeah server is a different beast. IIRC Zen 5 server chips support higher DDR5 speeds stock already. It makes sense that with more and more cores being thrown in there memory bandwidth would be important.

AFAIK server and desktop have never shared I/O die so I could definitely see Zen 7 server going DDR6 right away while desktop stays on DDR5 with the different I/O dies handling most of that complexity so the CCDs stay the same and shared between desktop and server. Then again the rumor mill says they will have a lot more variety on Zen 7 CCDs.
Has AMD ever had the server I/O die support fundamentally different memory technology than the corresponding desktop die? My impression is that Zen 4 got dragged to DDR5 maybe a generation early (and while Intel still supported DDR4) because they didn't want to design a DDR4 IO die that worked with that core. I suppose they're a bigger company with more resources now, but I still suspect we'll see the desktop and server IO dies track each other relatively closely unless DDR6 is literally not available on desktop when it's needed on server.
 

evan_s

Ars Tribunus Angusticlavius
7,314
Subscriptor
Has AMD ever had the server I/O die support fundamentally different memory technology than the corresponding desktop die? My impression is that Zen 4 got dragged to DDR5 maybe a generation early (and while Intel supported DDR4) because they didn't want to design an DDR4 IO die that worked with that core. I suppose they're a bigger company with more resources now, but still suspect we'll see the desktop and server IO dies track each other relatively closely unless DDR6 is literally not available on desktop when its needed on server.

I don't recall any time in the past when AMD has had different memory technology on desktop vs. server, but at the same time I also don't see why it would be hard to just reuse the Zen 6 I/O die for Zen 7 on the desktop, and that would obviously mean DDR5. I don't know how much design effort or how many constraints that would add to the Zen 7 CCD versus a CCD that only worked with the DDR6 memory that server needs. If it's just Infinity Fabric links either way, and all of the memory-type-specific stuff is in the I/O die anyway, it seems like it should be pretty easy. Maybe it only seems easy because I don't know enough, and there's more work needed on the CCD that isn't all contained in the I/O die where the physical memory controllers live.

Either way, AMD definitely seems to be in a better place to spend whatever extra effort is needed, and potentially even to handle different CCD designs between desktop and server, if that's what it actually takes. We're well past the early Zen days, when as much as possible had to be shared because they didn't have the resources to do anything else. Now we've got multiple laptop SoC designs with Strix Halo, Strix Point, and even lower-end versions that mix in normal cores and C cores.
 

Demento

Ars Legatus Legionis
15,353
Subscriptor
Wha... oh, SATA drives. Makes sense; the only thing going for SATA drives was a cost advantage, but they lost that some time ago. Since clearly nobody has wanted to make really high-capacity SATA drives, they've been a poor choice for a while now.
Well, also, if you somehow need 20TB of SSD, you can still stuff a lot more SATA disk into a desktop than NVMe. My NAS is SATA-only, and I was hoping the prices to make it solid state were better than they actually are. I really only use about 2TB worth, so it seemed a good plan given the NAS and network are all 2.5GbE.
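The arithmetic backs that plan up: over 2.5GbE, the network rather than a SATA SSD is the bottleneck. A quick sketch using nominal line rates, with SATA's 8b/10b coding overhead assumed at 20%:

```python
# Nominal throughput of a 2.5GbE link vs. a SATA III SSD.
gbe_2_5 = 2.5e9 / 8 / 1e6      # 2.5 Gb/s link in MB/s
sata3 = 6.0e9 / 8 / 1e6 * 0.8  # SATA III 6 Gb/s minus 8b/10b coding
print(f"2.5GbE:   {gbe_2_5:.0f} MB/s")
print(f"SATA III: {sata3:.0f} MB/s")
print("network-bound" if gbe_2_5 < sata3 else "disk-bound")
```

At ~312 MB/s of link throughput, even a SATA SSD's ~550 MB/s of real-world sequential speed is never the limit.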
 
  • Like
Reactions: Kyuu

Aeonsim

Ars Scholae Palatinae
1,237
Subscriptor++
Server is horribly bottlenecked by DDR5 and getting worse each generation. It has gotten so bad that Intel announced that they're not going to make an 8 channel Diamond Rapids part and will instead be using 16 channels of MRDIMMs (which each multiplex 128 bits of DDR5 into a single channel), which is insane. To control costs and keep core counts scaling, I expect AMD and Intel to both move to DDR6 as soon as physically possible. Since AMD (more so than Intel) keeps their desktop and server platforms in sync, I suspect that Zen 6 will be the last generation compatible with a DDR5 IO die. Or possibly Zen 7 launches with support for both DDR5 and DDR6 if prices and availability don't line up with when Epyc needs to jump to DDR6.

There is no desperate need for AMD to jump to DDR6/7 and one of the major points of a chiplet based CPU design is it gives you options. You can keep the same compute cores with different IO dies, mixing and matching parts to get the exact product you want to deliver.

Now it's reasonably clear that Zen 4 and especially Zen 5 cores are somewhat bandwidth-constrained, particularly on desktop; however, there are multiple ways to solve this without going to a newer, more expensive DDRx version.

On the memory side are two different technologies that can drastically increase bandwidth:
CUDIMM - which currently has DDR5-12000 modules for desktop systems (currently supported by Intel).
MRDIMM (Intel's MCRDIMM) - which is used in Intel servers, currently at DDR5-8800 and targeting DDR5-12800.

Switching to either of those techs on an IO die would allow them to approximately double bandwidth without needing to switch to DDR6+.
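The "approximately double" figure falls straight out of the transfer rates. A sketch assuming the same 128-bit desktop bus in both cases (illustrative, not platform specs):

```python
# Same dual-channel desktop bus, higher transfer rate: CUDIMM at DDR5-12000
# rates vs. a common DDR5-6000 configuration.
def desktop_gbs(mt_per_s: int) -> float:
    return mt_per_s * 16 / 1000  # 128-bit bus = 16 bytes per transfer

base = desktop_gbs(6000)
cudimm = desktop_gbs(12000)
print(f"DDR5-6000:    {base:.0f} GB/s")
print(f"CUDIMM-12000: {cudimm:.0f} GB/s ({cudimm / base:.1f}x)")
```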

The thing to remember is AMD currently has something like 5 different IO dies in production or use:
  1. Zen 3 Desktop IO die supporting DDR4 - infinity fabric interconnect, 2 channel memory controller
  2. Zen 3 Server IO die DDR4 - infinity fabric interconnect, 12 channel memory controller
  3. Zen 4/5 Desktop IO die DDR5 - infinity fabric interconnect, 2 channel memory controller
  4. Zen 4/5 Server IO die DDR5 - infinity fabric interconnect, 12 channel memory controller
  5. Strix Halo IO die LPDDR5 - different variant of infinity fabric interconnect, 4 channel memory controller
There are likely different variants of the server IO dies as well, or possibly Threadripper versions given the 4/8-memory-channel Threadrippers. In theory, as most if not all of these use variants of Infinity Fabric to connect to the compute dies, you could connect a Zen 3 CCD to a Zen 5 IO die or vice versa (though given that Zen 4/5 cores need more memory bandwidth, connecting them to a DDR4 IO die wouldn't be useful).

Thus, why go to DDR6 when they could make a revision of their current DDR5 IO dies that supports MRDIMM or CUDIMM? That would effectively double the memory bandwidth and still work with the existing motherboards and CPU sockets.

The other limiting factor for memory bandwidth is the compute-chiplet-to-IO-die interconnect: GMI-Narrow is 64GB/s down, 32GB/s up (used on some Epyc and all desktop Zen; see https://chipsandcheese.com/p/amds-epyc-9355p-inside-a-32-core ). They also have GMI-Wide, which doubles that to 128GB/s down, 64GB/s up. If they double the bandwidth at the memory controller, they'll need to either switch to GMI-Wide or increase the transfer rate of GMI-Narrow.
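Comparing those link rates with the memory bus shows why this matters. A sketch using the GMI figures above plus an assumed 128-bit dual-channel DDR5 bus:

```python
# CCD<->IO-die link read bandwidth vs. peak DRAM bandwidth, all in GB/s.
gmi_narrow = 64.0               # GMI-Narrow read, per the figures above
gmi_wide = 128.0                # GMI-Wide read
ddr5_6000 = 6000 * 16 / 1000    # 96 GB/s on a 128-bit desktop bus
ddr5_12000 = 12000 * 16 / 1000  # 192 GB/s at CUDIMM-class rates
print(f"GMI-Narrow {gmi_narrow:.0f} < DDR5-6000 {ddr5_6000:.0f}: "
      "a single CCD can't even drain today's bus")
print(f"GMI-Wide {gmi_wide:.0f} < DDR5-12000 {ddr5_12000:.0f}: "
      "doubled DRAM bandwidth would need faster links again")
```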

Given the above, I'd expect Zen 6 and possibly Zen 7 will stick with DDR5, though with a new IO die that supports CUDIMM/MRDIMM. In theory, if they need more bandwidth than that for Zen 7, they could use two different IO dies: a server one that supports DDR6 and needs a new socket, and a legacy-desktop one that supports AM5 and DDR5-12000 CUDIMMs. Or they could release two desktop CPUs: a "Zen 6.5" using the Zen 6 IO die with Zen 7 cores on AM5, and a true Zen 7 using a Zen 7 IO die supporting DDR6 and requiring an AM6 motherboard.
 
redleader said:
The failed concept in Arrow Lake was Intel's disastrous outsourcing to TSMC, which has nearly destroyed the company.
Intel concurrently designed Arrow Lake for both internal and TSMC fabs to give themselves a last-minute choice of which fab to pick. But their own fabs weren't capable of producing it, so their only choice was either to not launch it at all or to go with the TSMC design. Skipping yet another product generation would've been the far worse option.

Well also if you somehow need 20TB of SSD, you can still stuff a lot more SATA disk into a desktop than nvme. My NAS is SATA only, and I was hoping the prices to make it solid state were better than they actually are. I really only use about 2TB worth, so it seemed a good plan given the NAS and network are all 2.5gbe.
I suppose, but NVMe is better for the application, and M.2 drives had the price and capacity advantages. I truly wanted an all-solid-state NAS, but I also wanted to guarantee I wouldn't run out of capacity in the next decade if I built one... and neither M.2 nor SATA could give me that. In your case, one of ASUSTOR's Flashstor boxes would be perfect; they come in 6 or 12 M.2 slot versions that start at $405, and they just launched updated Gen 2 designs too. I would've bought one if 16TB M.2 drives were a thing. But at those capacities the best and cheapest options are U.3.
 
  • Like
Reactions: Demento
There is no desperate need for AMD to jump to DDR6/7 and one of the major points of a chiplet based CPU design is it gives you options.
In addition to the options you listed, AMD can also make some large L3 cache standard for high core count AM5 CPUs (not necessarily X3D, but that's the first thing that comes to mind).

It won't help with "classical" scientific performance (i.e. huge dense matrix multiplies) or heavy "streaming" loads (i.e. AI and other very bandwidth-intensive algorithms). But the lion's share of compute-intensive stuff (compiling, other parsers, sorting and searching, graphics rendering, and so on) exhibits pretty good memory locality, so a sufficiently huge L3 cache is quite effective at easing the pain of slow RAM.
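As a toy model of why: with the textbook average-memory-access-time formula, cutting the L3 miss rate matters far more than the big cache's slightly higher hit latency. The latencies and miss rates below are made-up but plausible numbers, not measurements of any real part:

```python
# AMAT = L3 hit time + (L3 miss rate * DRAM penalty), all in nanoseconds.
def amat(l3_hit_ns: float, l3_miss_rate: float, dram_ns: float) -> float:
    return l3_hit_ns + l3_miss_rate * dram_ns

small_l3 = amat(10.0, 0.30, 80.0)  # modest cache: 30% of accesses hit DRAM
big_l3 = amat(12.0, 0.10, 80.0)    # huge cache: slower hit, far fewer misses
print(f"small L3: {small_l3:.0f} ns average access")
print(f"big L3:   {big_l3:.0f} ns average access")
```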

If one of the augmented DDR5 memory variants clearly wins in the market, AMD will probably jump on that particular train. But if we're still stuck with ordinary DDR5, AMD can bide their time with crazy large caches on Ryzen 7 and Ryzen 9.

Epyc, though, will need a better solution for true HPC apps.
 
Epyc, though, will need a better solution for true HPC apps.
They have solutions for EPYC. Back when I mentioned the three possible options, well, AMD chose to add additional memory channels: 5th-generation EPYC parts are rated for twelve channels @ 6400 speed.

That should underscore just how badly AMD neglected consumer desktops by re-using the same IO die on Zen 5: even the twelve-channel server parts using ECC are rated for higher frequencies, at 6400. For the longest time, server parts used to run their memory ridiculously slower than the standard consumer desktop stuff...
 
  • Like
Reactions: continuum

Aeonsim

Ars Scholae Palatinae
1,237
Subscriptor++
...

Epyc, though, will need a better solution for true HPC apps.

They do: https://www.phoronix.com/review/azure-hbv5-amd-epyc-9v64h or https://www.phoronix.com/review/amd-epyc-azure-hbv2-hbv5
Zen 4 with an HBM IO die providing 6.7TB/s of bandwidth per CPU.

Guess that's another IO die I forgot to add to the list:
  1. Zen 3 Desktop IO die supporting DDR4 - infinity fabric interconnect, 2 channel memory controller
  2. Zen 3 Server IO die DDR4 - infinity fabric interconnect, 12 channel memory controller
  3. Zen 4/5 Desktop IO die DDR5 - infinity fabric interconnect, 2 channel memory controller
  4. Zen 4/5 Server IO die DDR5 - infinity fabric interconnect, 12 channel memory controller
  5. Strix Halo IO die LPDDR5 - different variant of infinity fabric interconnect, 4 channel memory controller
  6. Zen 4 Server IO die HBM - 6.7TB/s bandwidth.