AMD will bring its “Ryzen AI” processors to standard desktop PCs for the first time

GrygrFlzr

Smack-Fu Master, in training
3
I've yet to meet someone who actually wants a Copilot+ system. Like, I guess in theory they must exist somewhere. But I haven't seen one in person.
For a very, very short period of time, Copilot+ branding was unintentionally useful not for the NPU/AI stuff, but in terms of guaranteeing you would at least get 16GB of RAM in a world where a lot of base laptop configurations were stuck on 8GB of RAM. It was something easy to communicate to a layperson in terms of what to look for while purchasing.
That didn't last very long, because, well... *waves generally at everything*
 
Upvote
113 (114 / -1)

koolraap

Ars Tribunus Militum
2,234
I have, also, never seen a unicorn.
I have at work. He thinks everyone should have one. "To do what?" "Run agentic AI". "To do what?" "I've got a meeting to run to, great chat, Koolraap..."

EDIT: But back to the topic. Kind of cool, useful for a streaming box? I'm still waiting for AI to do something useful with upscaling VHS/DVD-quality content to 4K. Xena, Warrior Princess, I need to watch you again in 4K!

What? :)
 
Upvote
68 (71 / -3)

close

Ars Tribunus Militum
2,445
I've yet to meet someone who actually wants a Copilot+ system. Like, I guess in theory they must exist somewhere. But I haven't seen one in person.
Can this be used for a home server with software like Frigate, Immich, and others that take advantage of "AI"? Because that would be much more useful than "Copilot-whatever".
 
Upvote
54 (55 / -1)

Spazzles

Ars Scholae Palatinae
1,434
Can this be used for a home server with software like Frigate, Immich, and others that take advantage of "AI"? Because that would be much more useful than "Copilot-whatever".
Funny enough, Copilot+ CPUs seem to break Frigate hardware acceleration.

I've no doubt that it can be made to utilize specific NPUs, but the support certainly isn't automatic.
 
Upvote
29 (29 / 0)

Belphegor

Wise, Aged Ars Veteran
125
Subscriptor
The Pro variant of APUs has always been ideal in a NAS due to the low idle power consumption (thanks to the monolithic die) and official ECC support (unlike the consumer variant). Does the new generation still offer ECC capabilities?

As for the NPU, hopefully it can still be disabled in the BIOS settings.
 
Upvote
52 (52 / 0)

LeoRed

Wise, Aged Ars Veteran
133
Since it looks like we're going to be saddled with NPUs in our hardware from now on, is there anything useful (i.e. NOT AI) that they can be used for?
I mean, there could also be lots of useful AI. The issue is really the software stack, how much RAM these NPUs can access, and how fast that RAM is. I would really like to see better AI in games to make NPCs more believable, or just AI enemies... be better.

For Christ's sake, AoE IV's AI is NOT that much better than AoE 1's... just play a map with a puddle and it will build the Spanish Armada in it. If MS were serious about pushing NPUs, I bet there would be a lot they could do in their own games to make good use of them.

Instead, they decided to take screenshots of my password and credit card details. I guess that... I will avoid NPUs then? 🤷‍♂️
 
Upvote
74 (79 / -5)

Fred Duck

Ars Tribunus Angusticlavius
7,166
Andrew Cunningham said:
Unlike past launches, AMD is not providing its top-end laptop silicon for desktop use, at least not yet.
What a world. Desktops allow for much higher thermal loads than laptops, which would let chips run faster, longer, wider, etc., but the notebooks (we can't legally write laptops because of how hot they have been in the last several generations) get the more performant chips?

For Christ's sake, AoE IV's AI is NOT that much better than AoE 1's... just play a map with a puddle and it will build the Spanish Armada in it.
I didn't expect the Spanish Armada.

Remember when GPUs first appeared on the market and pundits considered perhaps someone would eventually make Physics Processing Units to calculate better physics and that experience designers would simply model real-world physics for everything even down to the vibrations of vocal cords in order to make speech?

The problem is that if it fundamentally changes how the in-experience world works, then it'll be another system requirement, and as it was, proper GPUs were difficult enough to wrangle. If, say, you could purchase a card to make the hit tie-in Jurassic Park: Trespasser behave, how many people would do that for one title? How many publishers would be willing to tie their fortunes to a nebulous bit of hardware like that? Did PhysX set the world on fire? Or was it mostly used for incidental animations that didn't affect anything important? I remember Nvidia dropped support for a bit.
 
Upvote
31 (32 / -1)

Chinsukolo

Ars Scholae Palatinae
987
Subscriptor++
I didn't expect the Spanish Armada.

Nobody expects the Spanish Inqu... wait a sec....

 
Upvote
21 (26 / -5)
Article :
At this point, it doesn’t seem as though AMD will be offering boxed versions to regular consumers
Which is a shame, since the reason I moved from Intel to AMD is that the AM5 socket provides an upgrade path, whereas Intel changes its socket too frequently. It also offers an upgrade path for the graphics.

The current laptop AMD NPU is BFP16, so I assume the latest chip will be the same?

The benefit is running calculations on the NPU as opposed to the graphics IC, and hopefully the tools will be available to use the NPU via Python or other languages.
 
Upvote
13 (14 / -1)

close

Ars Tribunus Militum
2,445
Funny enough, Copilot+ CPUs seem to break Frigate hardware acceleration.

I've no doubt that it can be made to utilize specific NPUs, but the support certainly isn't automatic.
I've just read the Frigate docs and you're right: they list support for Intel NPUs but not AMD. One can hope that once there are enough on the market, the software support follows.
 
Upvote
14 (14 / 0)
These need pairs of fast DDR5 sticks to maximize their performance, and prices for fast DDR5 sticks have shot into the stratosphere over the last year.
More importantly, Strix Halo doesn't just need fast RAM--it needs fast RAM that is stable. There's a reason Strix Halo is soldered: getting 8000MT/s RAM to be stable in SODIMM or DIMM modules is extremely miss or hit (with emphasis on miss). Just replacing solder balls with pins and fingers creates a massive engineering challenge that isn't easily overcome.

Hence all the Strix Halo minipcs having soldered memory. Which is also why Mac Studio and its 500GB/s memory is soldered too.
 
Last edited:
Upvote
53 (54 / -1)
Can this be used for a home server with software like Frigate, Immich, and others that take advantage of "AI"? Because that would be much more useful than "Copilot-whatever".
The advantage of the "AI"-branded SoCs as they are now isn't AI or Copilot. It is being able to have a massive amount of fast memory shared between the CPU and GPU. For Strix Halo that is 200GB/s across up to 128GB of memory. A Mac Studio with 128GB of memory would cost over double that, although the memory there would be 500GB/s.

Which for certain workloads is a downright steal and amazing--because while a Ryzen 395 system (16 cores/32 threads) with 128GB of RAM might (now) cost $2,500, the cost of an RTX 6000 with 96GB of VRAM is $10,000, with nothing else in the box to make it work.

Catch being, most people aren't doing those workloads. And the GPU is roughly equivalent to an RTX5060 laptop chip, which isn't necessarily a slouch and is "fine" for many people but not AAA gaming at higher resolutions.
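To see why that shared bandwidth is the headline number for local LLMs, here's a back-of-the-envelope sketch. It assumes decoding is purely memory-bandwidth-bound (each generated token streams the full weights from RAM once); the figures are nominal peaks and ignore compute, KV cache, and software overhead, so real speeds will be lower.

```python
# Rough, bandwidth-bound estimate of LLM decode speed: each generated
# token must read the full model weights from memory once, so
# tokens/sec is at most bandwidth / model size. Illustrative only.

def decode_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound on tokens/sec when decoding is memory-bound."""
    return bandwidth_gb_s / model_size_gb

# A 70B-parameter model quantized to 4 bits is roughly 35 GB of weights.
strix_halo = decode_tokens_per_sec(200, 35)   # ~5.7 tokens/sec
mac_studio = decode_tokens_per_sec(500, 35)   # ~14.3 tokens/sec

print(f"200 GB/s bus: ~{strix_halo:.1f} tok/s")
print(f"500 GB/s bus: ~{mac_studio:.1f} tok/s")
```

Either way the model at least *fits*, which is the part a 16GB or 24GB discrete GPU can't manage at all.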
 
Upvote
27 (28 / -1)

hyartep

Wise, Aged Ars Veteran
119
Since it looks like we're going to be saddled with NPUs in our hardware from now on, is there anything useful (i.e. NOT AI) that they can be used for?
NPUs--descendants of what used to be called digital signal processors--can be used for genuinely useful things, such as graphics upscaling, mic noise reduction, background blur in video calls, and more.
Unfortunately, most algorithms still rely on the CPU or GPU.
 
Upvote
27 (29 / -2)
I've yet to meet someone who actually wants a Copilot+ system. Like, I guess in theory they must exist somewhere. But I haven't seen one in person.
It depends on what you mean. I use AI daily, for one thing or another. I would very much like to be able to run it locally, instead of feeding some company all my information. However, even on my (pretty modern) PC, it is just too slow.

If these chips would make a local model acceptably fast, I would buy them.

Edit: I don't believe they will be fast enough. I use a separate GPU for acceleration, and I doubt that an on-chip NPU will be anywhere near as fast as a discrete GPU. Objective test results, anyone?
 
Upvote
-5 (12 / -17)

dahak777

Smack-Fu Master, in training
90
More importantly, Strix Halo doesn't just need fast RAM--it needs fast RAM that is stable. There's a reason Strix Halo is soldered: getting 8000MT/s RAM to be stable in SODIMM or DIMM modules is extremely miss or hit (with emphasis on miss). Just replacing solder balls with pins and fingers creates a massive engineering challenge that isn't easily overcome.

Hence all the Strix Halo minipcs having soldered memory. Which is also why Mac Studio and its 500GB/s memory is soldered too.
I wonder whatever happened to the CAMM2 standard, which looked promising.

I know it started out as a Dell laptop spec, but JEDEC was looking into it too. Not sure if it got adopted, but it should solve some of this, since it's supposed to offer speeds like soldered RAM while staying upgradable.

Although that is somewhat moot now given all the RAM pricing.
 
Upvote
17 (17 / 0)
I wonder whatever happened to the CAMM2 standard, which looked promising.

I know it started out as a Dell laptop spec, but JEDEC was looking into it too. Not sure if it got adopted, but it should solve some of this, since it's supposed to offer speeds like soldered RAM while staying upgradable.

Although that is somewhat moot now given all the RAM pricing.
Part chicken and egg. Partially the RAM crisis.

Also, with CAMM2 you only get one card/bus to put all your memory on. So there's no gradual upgrade path to add to your memory pool, as with desktops that have, say, 2 of 4 DIMM slots populated. If you want to upgrade your CAMM2 memory pool, you're throwing away the entire memory daughterboard. So while it is "better" than soldered RAM in terms of upgradeability down the road, in that an upgrade is at least possible, there's still a big e-waste problem, and also a cost problem: to go from, say, 64GB to 128GB you have to buy ALL NEW memory chips and can't reuse any of them the way you can with SODIMMs or DIMMs.

In theory, board makers could add more CAMM interfaces, but that would add cost and complexity--the kind reserved for platforms like Threadripper. Why? Because a single CAMM2 "slot" already fully uses a dual-channel 128-bit bus; hence the comparison to many-channel platforms like Threadripper.
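The bus math behind that, as a quick sketch (nominal peak figures only; sustained bandwidth is always lower):

```python
# Peak memory bandwidth is transfer rate (MT/s) times bus width (bits),
# divided by 8 bits per byte. These are theoretical peaks, not
# sustained throughput.

def peak_bandwidth_gb_s(mt_per_s: int, bus_bits: int) -> float:
    return mt_per_s * 1e6 * bus_bits / 8 / 1e9

one_camm2_module = peak_bandwidth_gb_s(8000, 128)  # dual-channel 128-bit
wide_soldered_bus = peak_bandwidth_gb_s(8000, 256) # e.g. a 256-bit LPDDR5X bus

print(f"128-bit @ 8000 MT/s: {one_camm2_module:.0f} GB/s")
print(f"256-bit @ 8000 MT/s: {wide_soldered_bus:.0f} GB/s")
```

So doubling bandwidth at a given transfer rate means doubling bus width, i.e. more CAMM interfaces or more channels, which is exactly the cost/complexity problem described above.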
 
Upvote
13 (13 / 0)

midnitet0ker

Smack-Fu Master, in training
29
On today's episode of "Products No One Is Asking For".

Seriously, what's the whole point of a Copilot PC? Even OEMs haven't been able to figure out how to get consumers wet over paying extra to go to the added trouble of running something locally that runs reasonably well over an internet connection and is maintained by an army of nerds and drunk investors. AMD is late to the game but who knows, maybe they'll carve a market where others have failed. Or maybe this will be more deadwood in the AI bonfire! 😁

"This one's got Copilot" is the tech equivalent of "This one goes to eleven."
 
Upvote
-2 (8 / -10)
I wonder whatever happened to the CAMM2 standard, which looked promising.

I know it started out as a Dell laptop spec, but JEDEC was looking into it too. Not sure if it got adopted, but it should solve some of this, since it's supposed to offer speeds like soldered RAM while staying upgradable.

Although that is somewhat moot now given all the RAM pricing.
I believe it is now fully JEDEC-standardized, though far more niche than their more usual work.

I don't think it is going to be relevant for this part, because AM5 doesn't do LPDDR, but we'll see if it shows up as an alternative to soldered memory for the BGA CPUs.
 
Upvote
1 (1 / 0)

solomonrex

Ars Legatus Legionis
13,516
Subscriptor++
Waste of die space.
Dedicated AI silicon is still useful for normal tasks on Windows, such as photo searching, as it was before the recent generative-AI bubble. Machine learning already existed and already had dedicated silicon in, say, iPhones.

But I get it, it's a little suspect from a company that has submarined normal UI interactions like start menu search.
 
Upvote
15 (15 / 0)
I have a copilot plus ryzen 350. I'm still trying to figure out how to disable all of the bullshit from copilot
It is an unending war. For my gaming system, I regularly need to run ShutUp10 because updates re-enable all the crap. Ultimately the only way to stop it is not to fight it--and install a different OS instead. I never see Copilot nonsense on my Strix Halo system, because it doesn't have Windows on it (Framework has excellent Linux support, as such things go).

MS wants to force this stuff on everyone, because you are not the customer; you are the product, in spite of paying for it.
 
Upvote
16 (16 / 0)

S_T_R

Ars Tribunus Militum
2,784
I mean there could be also lots of useful AI. The issue is really the software stack, how much RAM these NPUs can access, and how fast that RAM is. I would really like to see better AI in games to make NPCs more believable, or just AI enemies ...be better.

I knew a guy who was experimenting with (what was then called) machine learning and games a decade ago. It independently adapted to changes in both player tactics and game settings; e.g., adjust a rifle's stats, and it would use it more or less depending on effectiveness.

The problem was that people don't actually want it. It behaved optimally, not realistically. The AI would learn to aggressively min-max its playstyle. Imagine a pro player who leaned hard on cheese and was supernaturally able to time exploits. It was hard to tune down, too: make it too dumb to figure out the exploit, and it would also fail to understand what the player was doing and how to respond. It was computationally cheaper, and easier to tune, with conventional heuristic methods.

Models are larger these days, but I think the underlying issue remains the same: LLMs are plausible because people only see the end product. Interactive game ML/AI immerses people in how the AI makes the proverbial sausage, and the steps it takes to get to the end will break the suspension of disbelief.
 
Last edited:
Upvote
10 (12 / -2)

khumak50

Ars Tribunus Militum
1,532
I'm not sold on the use case for an NPU in a desktop. For a mobile device without a discrete GPU I can see it being useful for running AI functions that don't require much performance, but for a desktop only the very bottom tier of desktops are likely to lack a discrete GPU for most consumers.

Last I heard, NPUs are FAR weaker at AI tasks than pretty much any GPU. So the only use case I could see would be running a local AI model that requires more memory than the VRAM on your GPU. But for a local model with that sort of demand, will an NPU have enough performance to even be relevant? Can I run that 64GB local model that my 5090 can't run due to memory constraints on my NPU if I have 64GB of system RAM? I tend to doubt it. (I don't have a 5090, just making my point.)
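For a sense of scale on the "fits or doesn't fit" question: a model's weight footprint is just parameter count times bytes per parameter. A rough sketch (it ignores KV cache and activation overhead, so real usage runs somewhat higher):

```python
# Approximate memory footprint of an LLM's weights at a given
# quantization level: parameters * bits-per-parameter / 8.
# KV cache and activations add more on top of this.

def weights_gb(params_billion: float, bits_per_param: int) -> float:
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

print(f"70B @ FP16:  {weights_gb(70, 16):.0f} GB")  # far beyond any consumer GPU's VRAM
print(f"70B @ 4-bit: {weights_gb(70, 4):.0f} GB")   # fits in 64GB of system RAM
```

Whether an NPU can then serve that model at a usable speed is a separate question, and the bandwidth of the system RAM is usually the limiting factor.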

So from my perspective, either an NPU can run a bigger model than a GPU if you have enough system RAM, in which case it's potentially relevant, or it can't, in which case it's a waste of silicon that I would rather not pay for.

Are they planning to continue selling desktop CPUs without an NPU? If so then ok non issue. If not then we're looking at AMD raising prices for wasted silicon.
 
Upvote
-5 (4 / -9)

evan_s

Ars Tribunus Angusticlavius
7,314
Subscriptor
Part chicken and egg. Partially the RAM crisis.

Also, with CAMM2 you only get one card/bus to put all your memory on. So there's no gradual upgrade path to add to your memory pool, as with desktops that have, say, 2 of 4 DIMM slots populated. If you want to upgrade your CAMM2 memory pool, you're throwing away the entire memory daughterboard. So while it is "better" than soldered RAM in terms of upgradeability down the road, in that an upgrade is at least possible, there's still a big e-waste problem, and also a cost problem: to go from, say, 64GB to 128GB you have to buy ALL NEW memory chips and can't reuse any of them the way you can with SODIMMs or DIMMs.

In theory, board makers could add more CAMM interfaces, but that would add cost and complexity--the kind reserved for platforms like Threadripper. Why? Because a single CAMM2 "slot" already fully uses a dual-channel 128-bit bus; hence the comparison to many-channel platforms like Threadripper.

Adding additional memory is pretty hit or miss anyway, even on desktop platforms. You don't really want to be running a single DIMM in single-channel mode unless you have to, and even then, getting an exactly matching second DIMM isn't always easy. Upgrading from 2 DIMMs to 4 DIMMs, with two DIMMs per channel, is possible if the board supports it, but it can often hurt RAM timings even if you do get a matched second set.

A pulled DIMM set or CAMM2 module isn't necessarily e-waste either. It could be sold as used and become an upgrade for someone else who started out with even less memory, or go into a new build. And even if the CAMM2 module is e-waste after the upgrade, that's better than the entire board becoming e-waste due to soldered-on memory.

I don't really see much value in CAMM2 for desktop motherboards and standard DDR5. It might allow some extreme memory overclocks on desktops, but that seems to be about it, and DIMM and SODIMM slots work fine for that anyway. CAMM2 for LPDDR, on the other hand, is very useful for getting away from soldered-on memory. I think it will definitely come to that side of things--not because OEMs care about customers, but because it makes logistics easier for them by taking one variable out of the configuration matrix of motherboards they offer.
 
Upvote
6 (6 / 0)