The RAMpocalypse has bought Microsoft valuable time in the fight against SteamOS

Status
You're currently viewing only mdrejhon's posts. Click here to go back to viewing the entire thread.

mdrejhon

Ars Praefectus
3,120
Subscriptor
Seems to me that Microsoft isn't going to have much better luck with hardware margins than Valve -- obvious pivot point here is: sell conversion kits aimed at existing Windows 10 hardware that can't run Windows 11. By "conversion kits" I mean: let people order a package that includes a couple of game controllers and a thumb drive that can be used to replace or dual boot into Steam OS with minimal fuss. They'd need to build out their support org, but at this point, that's likely cheaper than trying to compete with hardware.
My turn to roast Microsoft. Hoo boy, here's a VIP-flamey one. It's a whopper... Get your unpopped popcorn and bring it within 30 centimeters of the roast below. <roast>

While my RX 5000 series GPUs can have fun in my newer rigs...

...I have a still very-frequently-used (Quotey-McQuoteFace) "Windows 11 Unsupported" (/Quotey) older supplemental gaming rig: an i7-7740X PC with an RTX 3080 that runs games better than many newly bought computers today. So I do the unofficial tweak to make Windows forgive the symbolic unsupportedness.

That CPU is... drumroll... about to hit a DECADE OLD and can still do a mean 720fps 720Hz TestUFO.

It even does some supplemental Windows 11 software development, crissakes, using paid Office 365 and other paid Microsoft products. It's good to test stuff on what's still very happily upper-midrange-performance.

It can do a stable all-cores overclock to 5 GHz nicely too, and while it's only a 4C/8T chip, it still runs many games pretty well.

I admonish Microsoft on this line item of having obsoleted Windows faster than Apple, which was unusual. My 7740 has been unsupported by Microsoft since Windows 11's release in 2021 -- five bleeping years ago, and only a few years after I had bought the rig.

Well-specced systems now tend to become durable appliances. They last longer than an average cheap new IoT washing machine, which has more expeditious planned-obsolescence features than this Microsoft-Windows-Unsupported PC, chrissakes!

Unlike yesterday's 3-year upgrade cycles, era-high-end CPUs can now have 10-12 year expected lifetimes of "better than midrange feel" (assuming later upgrades like a GPU upgrade, an M.2 SSD, and a RAM upgrade).

A 10-12 year old CPU (with just those other upgrades) can now be cheaply, slightly faster at gaming than today's midrange newly purchased PCs, because Moore's Law has slowed down so much. While the fabs have fun with AI chips, let us milk our perfectly fine 10-year-old Ferraris of CPUs that still keep up with current ones.

(P.S. I use a bunch of Valve &amp; Steam branded products too. Valve has made it so that I'm looking forward to Linuxing the i7-7740X, because Microsoft is pushing me away from it. That could razor-and-blades its way to the rest of my computers in a few years, eating into the newer PCs because the older PCs were more fun to use. But in this case, Windows 11 clearly still runs speedily on it without even trying. Why toss away this low-hanging apple?)

Microsoft's Display Team knows me as Blur Busters, of TestUFO fame -- and I willingly include 10-year-old PCs in the office fleet that still outperform 0-year-old midrange PCs. I'm the one who convinced Microsoft to support 5000Hz. I'm not expecting Microsoft to enable the upgraded "line-itemey" features on older rigs -- just the whole OS, plain and simple.

Some Samsung Android phones were supported longer than that i7 CPU!

Doesn't Microsoft want my willing ongoing subscription money to continue to milk those aging PCs?

</roast>

Now you may go ahead and eat the popcorn you brought to my Wall-o-Text™.
My roast has sufficiently popped all the kernels. Hope the popcorn is not burnt.
 
Last edited:
Upvote
22 (30 / -8)

mdrejhon

Ars Praefectus
3,120
Subscriptor
I find it hard to believe that a 7740X 4-core Kaby Lake CPU can keep up with a modern midrange CPU like the 9600X or the 250K. The 3080 can keep up with a midrange GPU like the 5060, but it's only 5.5 years old.
I can respect that. But I have fans from literally ~200 countries -- pretty much all of them -- who consider "midrange" differently:

Up front, to quiet those "PC semantics in MY country" torches and pitchforks...

...I have a 5080 and a 5090 as well, so I've got much more modern rigs. The 3080, however, really holds its "worldwide midrange" compared to the average PC GPU that the average country resident can afford.

Even the RTX 5060 is being called "midrange" in Western countries, including my country, Canada. But the 3080 is faster for low-latency esports use cases and has roughly 20% higher PassMark numbers.

Being Canadian, with >90% of my fans overseas and >50% of TestUFO traffic coming from social media and manufacturers across multiple ideographic languages, I'm probably not pegging the "midrange" definition to the same country as you are.

PCMR -- aka the PC Master Race community -- respects even low-end machines pretty well (on average, anyway), and there are a lot of esports players in some countries managing with far more modest rigs than an old ~3080. I cater to all sorts of fans worldwide.

Which authority dictates the definition of "midrange"? Microsoft has been the torchbearer of that in the past, but that's exactly what I'm arguing against. Are we accidentally perpetuating Microsoft as the de facto authority on the "midrange" definition?

The definition of "midrange" by ~200-country standards is nebulous. Few disagree that Moore's Law's slowdown has stretched midrange timescales longer. My point was not about nitpicking whose worldwide-average definition of midrange wins (which country, buddy?); my point was about Microsoft itself.

Game optimization has become crap -- are we perpetuating that? Responsible game optimizers keep older rigs around. I also use a 3080 to make sure TestUFO blasts 720fps 720Hz, and it even keeps up at 480fps 480Hz on a 1080 Ti. We don't downvote responsible game optimization. If we optimize better for "another country's average midrange", then our NVIDIA RTX 5090s and AMD 7900 XTXs sing louder, with better graphics without lagging.

You know... a well-tuned 10-year-old rig with a clean Windows reinstall can feel faster than a newer rig with a crud-filled Windows 11 install -- and Windows 11 crudfills so quickly that it hands Microsoft an excuse to obsolete hardware faster. A clean W11 on a 10-year-old rig outperforms a crudfilled W11 on much more recent rigs; we have to reinstall Windows (and slim its fat) just to keep it a rocket.

Are we really going to PCMR-define "midrange" for one specific country by "crud-filled Windows 11 performance" standards?

As someone with fans in ~200 countries, what you can afford in your country is what I respect -- I'm not making fun of your midrange.
 
Last edited:
Upvote
1 (9 / -8)

mdrejhon

Ars Praefectus
3,120
Subscriptor
In raw multi core it can't, but games aren't microbenchmarks. Most modern games are keeping the PS/XB as a baseline and just don't take huge advantage of the latest CPUs. A million e-cores and huge caches isn't going to help games that are barely multithreaded enough to use the 4 big cores in that 7th gen and use engines designed to maximize the few megs of cache in the consoles.
Exactly, and a 3080 still outperforms a 5060 by 20% in PassMark.

It still out-framerates perfectly fine in the organic low-latency esports use cases a lot of my fans are after.

My gorilla silicon (e.g. 5090) can be used for lots of fun things, but a 3080 is still damn respectably "high-end-ish" in more than 50% of countries.

The whole blue marble isn't equally lucky, and a 3080 still sprays framerate out the wazoo for generic 400 Hz monitors that now cost only $100-$150 in some countries (~$180-$220 on Amazon USA for lots of generic esports brands). Even 240Hz OLEDs are quickly following suit: the AOC 240Hz OLED just fell south of $350, and an INNOCN locally-dimmed miniLED 240Hz goes for only $200. An RTX 3080 still pampers those babies at the resolutions they sell at, for the specific games they're targeted at.
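All the Hz talk above boils down to simple arithmetic: a GPU has to deliver every frame within 1000/Hz milliseconds to saturate a monitor's refresh rate. A minimal POSIX awk sketch (the helper name `frame_budget_ms` is mine, purely for illustration):

```shell
#!/bin/sh
# Frame-time budget a GPU must meet to saturate a given refresh rate.
# frame_budget_ms is a hypothetical helper name, not from any real tool.
frame_budget_ms() {
  awk -v hz="$1" 'BEGIN { printf "%.2f ms per frame at %dHz\n", 1000 / hz, hz }'
}
frame_budget_ms 400   # 2.50 ms per frame at 400Hz
frame_budget_ms 240   # 4.17 ms per frame at 240Hz
frame_budget_ms 720   # 1.39 ms per frame at 720Hz
```

The shrinking budget is why "organic framerate" (no framegen) matters at these refresh rates: at 400Hz there's only 2.5 ms to render, present, and scan out.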

A 10-year-old CPU with a 5-year old GPU has still won some 2025 online esports championships in some far away countries.

A more upgraded system is obviously better; but for some titles that don't bottleneck on core count, it's surprisingly razor-thin when a large number of your competitors in your specific country are using integrated or cheaper GPUs. Most competitive esports isn't core-heavy.
 
Last edited:
Upvote
3 (3 / 0)

mdrejhon

Ars Praefectus
3,120
Subscriptor
That's just not true. Techspot recently did a test of the past ten years of Intel CPUs and the 270K gets 206fps average and 157fps 1% lows at 1080p medium in a 14 game test suite, while the 7700K only gets 95fps average and 67fps 1% lows. That's over 2X performance in games. The 250K will be a bit slower than the 270K, but still much faster than the 7740X or 7700K.
Reread the game titles -- not every single title on that list is an esports online championship game. Most such games aren't heavily multicore, although some are.

Some of my fans optimize for 1-2 specific games they play, or a whole specific category, and the margins become tighter.

I do still love me a modicum of framegen in the right places, with a good game of Cyberpunk 2077 or Wukong, but we're moving goalposts at this juncture. The Venn diagram of our goalposts overlaps. I do write about loving framegen... but that's not today's goalpost.

In years like 1991 and 1996, with Moore's Law at torrid velocity in clock speeds, the goalpost movement speed was supersonic.

You had a 386SX-25 in 1990 turning into a 1 GHz AMD Athlon in 2000. Within just a subset of that decade, it was relatively easy for a bottom-10%-ile system of a few years later to outperform a top-10%-ile system of a few years earlier.

This is not the case today. Even the frame rate numbers you cite amount to only about a 2x difference across nearly a decade of CPUs.

And those frame rates are below what games like Counter-Strike and Fortnite push. There are popular esports games today where an RTX 3080 can blast 400fps organic frame rates, and a cheap $150 generic 400Hz esports monitor is "good enough" for many. The websites want to earn Amazon affiliate income on the new GPUs and lovely monitors (not a problem -- I earn Amazon affiliate income too), but with due respect, it also incentivizes omitting my point entirely.

VennDiagramOverlap(yourGoalPosts, myGoalPosts); /* still has a goalie opening thanks to pokey Moore's Law. */
 
Last edited:
Upvote
3 (3 / 0)

mdrejhon

Ars Praefectus
3,120
Subscriptor
The sleeping giant in waiting. Fact is Apple make the best gaming PC’s just needs compatibility. Almost every Mac user apart from maybe Neo owners has a gaming laptop that is fanless/near fanless, extremely lightweight and has incredible battery life.
Also, CPU efficiency in some software too.

Since I am forced to do cross-platform software -- my cross-platform hybrid C++/JavaScript build workflows are mainly powered by POSIX toolchains -- my 2021 M1 Pro MacBook Pro (top spec) out-compiles the same app (feature parity) on a recent AMD 9800X3D in certain compile workloads. Shockingly so. Builds finish sooner, even when threaded, because several steps are single-core-bottlenecked, combined with compile-efficiency-per-GFLOPS, the lack of VM layers, and mature high-performing POSIX-compliant (Linux/Mach/UNIX) compile toolchains that "just work" cross-platform.

YMMV of course (threading mixes, Windows-specific MSVC and the much faster Visual Studio toolchains that exit the WSL2 VM but create cross-platform-maintenance headaches when diverging from "pure POSIX on all platforms" workflows, etc). Choose one: the fastest Windows compile, OR near-identical compile scripts that just work on PC/Mac/Linux with the same build utilities.
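To make the "near-identical compile scripts" option concrete, here's a minimal sketch of the portable job-count detection a "pure POSIX on all platforms" workflow tends to rely on. It assumes only sh plus either nproc (Linux/WSL2) or sysctl (macOS/BSD); `cores` is my own hypothetical helper name:

```shell
#!/bin/sh
# Portable CPU-core detection for "make -j" style parallel builds.
# "cores" is a hypothetical helper name for illustration.
cores() {
  if command -v nproc >/dev/null 2>&1; then
    nproc                  # Linux / WSL2
  elif command -v sysctl >/dev/null 2>&1; then
    sysctl -n hw.ncpu      # macOS / BSD
  else
    echo 1                 # conservative fallback
  fi
}
echo "building with $(cores) parallel jobs"
# make -j "$(cores)"       # identical invocation on Linux, macOS, and WSL2
```

The point isn't this particular snippet -- it's that the same script file, byte for byte, drives the build on all three platforms, which is exactly what a Windows-specific MSVC pipeline can't do.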

For some cross-platform app dev, the performance bottlenecks of W11 have now come to the point where I'd rather software-develop certain apps on that old MacBook than on a more recent Windows 11 development system.

I am so thankful to Microsoft that WSL exists. It's one of the better things Microsoft did; it made life much easier. I wish the Microsoft development toolchains were faster-performing, and that WSL2 wasn't a slow VM -- it still underperforms for POSIX build toolchains even when using its faster native filesystem instead of the /mnt/c/ filesystem.
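For anyone hitting the same wall: the big WSL2 slowdown for POSIX builds is usually the 9P-bridged Windows drives mounted under /mnt/&lt;letter&gt;/, not the Linux side itself. A tiny sketch of the distinction I mean (the path prefixes are WSL2 defaults; `build_path_speed` is a made-up helper name for illustration):

```shell
#!/bin/sh
# Classify a build path by expected WSL2 filesystem speed.
# /mnt/<drive-letter>/ paths cross the 9P/drvfs bridge (slow for many
# small file operations, which is what compiles do constantly);
# everything else lives on the native ext4 VHD (fast).
# build_path_speed is a hypothetical helper name for illustration.
build_path_speed() {
  case "$1" in
    /mnt/[a-z]/*) echo "slow: Windows drive via 9P/drvfs" ;;
    *)            echo "fast: native Linux filesystem" ;;
  esac
}
build_path_speed /mnt/c/Users/me/project
build_path_speed "$HOME/project"
```

Keeping the source tree in the Linux home directory rather than under /mnt/c/ is the usual workaround, though as noted above, builds there still trail a native Linux or Mac box.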

When it comes down to it, Linux is fast and nice for most cross-platform POSIX build toolchains. But Linux still has (maybe not for long) fewer users than Mac, from a target-market prioritization point of view. So the MacBook tends to edge out as the (cross-platform POSIX compile performance + market penetration) package deal.

Mind you -- if you're compiling Chromium for a few hours under a Windows toolchain (not a homogeneous POSIX toolchain via WSL/WSL2), then sure, the 9800X3D is ~2X faster than the MacBook (not bad, considering the latter is merely a laptop). But for a homogeneous cross-platform POSIX development+build toolchain (using WSL2), the desktop W11 machine of 9800X3D + 5080 is by far the slowest versus older Linuxes/Macs when benchmarking certain 100% POSIX-toolchain cross-platform projects using various cross-platform GNU utilities.

It's all about the variables you pit the systems at.
 
Last edited:
Upvote
1 (1 / 0)