> I've yet to meet someone who actually wants a Copilot+ system. Like, I guess in theory they must exist somewhere. But I haven't seen one in person.

I have, also, never seen a unicorn.
> I've yet to meet someone who actually wants a Copilot+ system. Like, I guess in theory they must exist somewhere. But I haven't seen one in person.

For a very, very brief window, Copilot+ branding was unintentionally useful, not for the NPU/AI stuff, but because it guaranteed you would at least get 16GB of RAM in a world where a lot of base laptop configurations were stuck at 8GB. It was something easy to communicate to a layperson in terms of what to look for while purchasing.
> I have, also, never seen a unicorn.

I have at work. He thinks everyone should have one. "To do what?" "Run agentic AI." "To do what?" "I've got a meeting to run to, great chat, Koolraap..."
> I've yet to meet someone who actually wants a Copilot+ system. Like, I guess in theory they must exist somewhere. But I haven't seen one in person.

Can this be used for a home server with software like Frigate, Immich, and others that take advantage of "AI"? Because that would be much more useful than "Copilot-whatever".
> Can this be used for a home server with software like Frigate, Immich, and others that take advantage of "AI"? Because that would be much more useful than "Copilot-whatever".

Funny enough, Copilot+ CPUs seem to break Frigate hardware acceleration.
> Since it looks like we're going to be saddled with NPUs in our hardware from now on, is there anything useful (i.e. NOT AI) that they can be used for?

I mean, there could also be lots of useful AI. The issue is really the software stack, how much RAM these NPUs can access, and how fast that RAM is. I would really like to see better AI in games to make NPCs more believable, or just AI enemies... be better.
Andrew Cunningham said:
> Unlike past launches, AMD is not providing its top-end laptop silicon for desktop use, at least not yet.

What a world. Desktops allow for much higher thermal loads than laptops, which would let chips run faster, longer, wider, etc., but the notebooks (we can't legally write "laptops" because of how hot they have been over the last several generations) get the more performant chips?
> For Christ's sake, AoE IV's AI is NOT that much better than AoE 1's... just play a map with a puddle and it will build the Spanish Armada in it.

I didn't expect the Spanish Armada.
> At this point, it doesn’t seem as though AMD will be offering boxed versions to regular consumers

Which is a shame, since the reason I moved from Intel to AMD is that the AM5 socket provides an upgrade path, where Intel changes their socket too frequently. It offers an upgrade path for the graphics, too.
> Funny enough, Copilot+ CPUs seem to break Frigate hardware acceleration.

I just read the Frigate docs, and you're right: they list support for Intel NPUs but not AMD. One can hope that once there are enough on the market, the software support follows.
I've no doubt that it can be made to utilize specific NPUs, but the support certainly isn't automatic.
> These need pairs of fast DDR5 sticks to maximize their performance, and prices for fast DDR5 sticks have shot into the stratosphere over the last year.

More importantly... Strix Halo doesn't just need fast RAM; it needs fast RAM that is stable. There's a reason Strix Halo is soldered: getting 8000MT/s RAM to be stable in SODIMM or DIMM modules is extremely miss or hit (with emphasis on miss). Just replacing solder balls with pins and fingers creates a massive engineering challenge that isn't easily overcome.
> Can this be used for a home server with software like Frigate, Immich, and others that take advantage of "AI"? Because that would be much more useful than "Copilot-whatever".

The advantage of the "AI"-branded SoCs as they are now isn't AI or Copilot. It is being able to have a massive amount of fast memory shared between CPU and GPU. For Strix Halo, that is 200GB/s of bandwidth to up to 128GB of memory. A Mac Studio with 128GB of memory would cost over double that, although the memory there would be 500GB/s.
> Since it looks like we're going to be saddled with NPUs in our hardware from now on, is there anything useful (i.e. NOT AI) that they can be used for?

AI chips and NPUs, in the past also called "digital signal processors", can be used for useful things such as graphics upscaling, mic noise reduction, background blur in video, and other stuff.
> I've yet to meet someone who actually wants a Copilot+ system. Like, I guess in theory they must exist somewhere. But I haven't seen one in person.

It depends on what you mean. I use AI daily, for one thing or another. I would very much like to be able to run it locally, instead of feeding some company all my information. However, even on my (pretty modern) PC, it is just too slow.
> More importantly... Strix Halo doesn't just need fast RAM; it needs fast RAM that is stable. There's a reason Strix Halo is soldered: getting 8000MT/s RAM to be stable in SODIMM or DIMM modules is extremely miss or hit (with emphasis on miss). Just replacing solder balls with pins and fingers creates a massive engineering challenge that isn't easily overcome.
>
> Hence all the Strix Halo mini PCs having soldered memory. Which is also why the Mac Studio and its 500GB/s memory are soldered too.

I wonder whatever happened with the CAMM2 standard that looked promising.
> I wonder whatever happened with the CAMM2 standard that looked promising. I know it started out as a Dell laptop spec, but JEDEC was looking into it too; not sure if it got adopted. It should solve some of this, since it's supposed to offer faster speeds like soldered memory while staying upgradable.
>
> Although that is somewhat moot now given all the RAM pricing.

Part chicken and egg. Partially the RAM crisis.
> I've yet to meet someone who actually wants a Copilot+ system. Like, I guess in theory they must exist somewhere. But I haven't seen one in person.

I have a Copilot+ Ryzen 350. I'm still trying to figure out how to disable all of the bullshit from Copilot.
> I wonder whatever happened with the CAMM2 standard that looked promising. I know it started out as a Dell laptop spec, but JEDEC was looking into it too; not sure if it got adopted. It should solve some of this, since it's supposed to offer faster speeds like soldered memory while staying upgradable.
>
> Although that is somewhat moot now given all the RAM pricing.

I believe that it is now fully JEDEC-standardized, though far more niche than their more usual work.
> Waste of die space.

The dedicated chips for AI are still useful for normal tasks on Windows, such as photo searching, as they were before the recent generative AI bubble. Machine learning already existed and already had dedicated silicon on, say, iPhones.
> Can't wait for the day marketing stops using the 'AI' moniker...

Have you tried the new AI Doritos? They are 25% more expensive but 100% cheesier.
> I have a Copilot+ Ryzen 350. I'm still trying to figure out how to disable all of the bullshit from Copilot.

It is an unending war. For my gaming system, I regularly need to run ShutUp10 because updates re-enable all the crap. Ultimately the only way to stop it is to not fight it and install a different OS. I never see Copilot nonsense on my Strix Halo system, because it doesn't have Windows on it (Framework has excellent Linux support, as such things go).
> Part chicken and egg. Partially the RAM crisis.
Also... with CAMM2 you only get one card/bus to put all your memory on. So there's no gradual upgrade path to add to your memory pool, as with desktops that have, say, 2 of 4 DIMM slots populated. If you want to upgrade your CAMM2 memory pool, you're throwing away the entire memory daughterboard. So while it is "better" than soldered RAM in terms of upgradeability down the road, in that upgrading is at least possible, there's still a big e-waste problem, and also a cost problem: to go from, say, 64GB to 128GB, you have to buy all-new memory chips and can't reuse any of them like you can with SODIMMs or DIMMs.
In theory... board makers could add more CAMM interfaces, but that would add cost and complexity, which is reserved for platforms like Threadripper. Why? Because a single CAMM2 "slot" already fully uses a dual-channel, 128-bit bus. Hence the comparison to many-channel platforms like Threadripper.
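The bandwidth arithmetic behind that point can be sketched quickly. This is a back-of-the-envelope calculation of peak theoretical numbers (bus width times transfer rate), not real-world throughput, and the "what a second CAMM interface would buy" case is a hypothetical illustration, not an existing product:

```python
# Peak theoretical memory bandwidth: bus width in bits / 8 gives bytes
# moved per transfer; multiply by transfers per second (MT/s) and
# divide by 1000 to get GB/s. Real sustained throughput is lower.
def peak_bandwidth_gbs(bus_width_bits: int, mega_transfers_per_s: int) -> float:
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * mega_transfers_per_s / 1000  # GB/s

# One CAMM2 module occupying a dual-channel, 128-bit bus at DDR5-8000:
print(peak_bandwidth_gbs(128, 8000))   # 128.0 GB/s

# Hypothetical second CAMM interface, i.e. a 256-bit bus at the same speed:
print(peak_bandwidth_gbs(256, 8000))   # 256.0 GB/s
```

Doubling the interfaces doubles the peak bandwidth, which is exactly why many-channel platforms like Threadripper pay for the extra traces and slots.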