Ubuntu disables Intel GPU security mitigations, promises 20% performance boost


Fred Duck

Ars Tribunus Angusticlavius
7,248
Dan Goodin said:
Spectre, you may recall, first came to public notice in 2018.
True but MI6 have been fighting them since at least 1961.

Dan Goodin said:
That likely means that people using games and similar apps will see no benefit.
When I was younger, GPUs were strictly for powering entertainment experiences and similar applications. ._.

If the boost only applies to OpenCL and oneAPI Level Zero, does that mean A) entertainment experiences and similar applications were never impacted to start with, or B) they will continue being impacted in the future?
 
Upvote
114 (117 / -3)

williamyf

Ars Tribunus Militum
2,420
Seems like common sense to me - why waste effort on a finicky timing attack when there are so many easier ways to own most machines, especially in the age of ill-understood chatbots? Fix those first, then worry about this obscure stuff.
This "obscure stuff" was highly important for cloud providers (like Amazon, Microsoft, Google, IBM, Oracle, Vodafone, Telefonica, OVH, et al.), as anyone was allowed to buy a VM, have said VM run the Spectre code, and exfiltrate data from other SERVER VMs.

Even in internal clouds this was a problem, as a rogue employee who could spin up a VM in the internal cloud could exfiltrate sensitive data from SERVER VMs...

But yes, on a single-user local (i.e. non-cloud) desktop, an obscure and difficult attack.
 
Upvote
122 (123 / -1)

GenericAnimeBoy

Ars Tribunus Militum
1,812
Subscriptor++
Ubuntu users who run a custom Linux kernel without Spectre GPU mitigations should keep the compute runtime level mitigations on, a spokesman for Ubuntu developer Canonical said. These users can build a Compute Runtime themselves with the NEO_DISABLE_MITIGATIONS=false flag added.
So one re-enables the mitigations by setting the 'disable mitigations' flag to false? Is that kind of double negation common in naming things like build flags?
 
Upvote
52 (54 / -2)

Deleted member 1085004

Guest
Has there been a known PoC of an exploit that could install a backdoor or ransomware on a computer or server with Spectre yet? I think for consumer use, it's probably not a major concern. If this were a DOD server facing an APT from a nation state with plenty of time on its hands to develop a zero-day exploit, though, it would be quite a bit different.
 
Upvote
-3 (8 / -11)
“The economics just don't stack up for attackers, especially when there are so many lower-effort higher-reward attack approaches they can throw at stuff.”

This is the main reason why it's unnecessary. Even with Linux, there are a lot of ways to attack a system that don't involve meticulously engineering a solution that will only work on certain CPU/GPU revisions and won't be easy to deploy in widespread attacks.

It's far easier to socially engineer users or target known or newly engineered exploits against software stacks in widespread deployment: WordPress plugins, known buffer overflows, cross-site scripting, SQL injection (yes, this is still one of the top five attacks; many bad programmers still don't sanitize input), misconfigured cloud and on-prem services, and other very low-hanging fruit.

Most Spectre type attacks are only useful against datacenters where a potentially hostile container or VM is running on the same hardware as the victim. While it's true no one has spotted Spectre attacks in the wild, the nature of Spectre class attacks is that you pretty much have to come across the explicit code in some kind of code leak to even know. The absence of evidence here is not evidence of absence.

Edit to add: Warning, turning off Spectre mitigations across the board on Linux and Windows can lead to degraded performance on newer CPUs from AMD and Intel, because they're designed to work with the mitigations that were in place at release time! The link is for Zen 4, but there are other examples out there for (I believe) Zen 3 and some recent Intel CPUs.
 
Last edited:
Upvote
57 (57 / 0)

AssKoala

Smack-Fu Master, in training
59
Subscriptor++
So one re-enables the mitigations by setting the 'disable mitigations' flag to false? Is that kind of double negation common in naming things like build flags?
A lot of build systems default flags to 0/false if not explicitly defined.

So, you'll often write a double negative so the default situation is to have the "safe" or "previous" behavior rather than the new behavior with minimal changes to the code.

I don't know that the code looks like this, but as an example:

C++:
// ... Stuff ...
#if !NEO_DISABLE_MITIGATIONS // ! means invert the value, so false becomes true
mitigateSpectre();
#endif
// ... Stuff ...

Obviously, this varies -- sometimes a coding standard might say only positive flags, for example, or it might not say anything about it at all, so you end up with a mix -- but it's a really common scenario.
 
Upvote
31 (31 / 0)
So one re-enables the mitigations by setting the 'disable mitigations' flag to false? Is that kind of double negation common in naming things like build flags?
If the default is a flag being "on" or "enabled" you would just drop the option line that turns them off. Easy peasy.

Can these protections be disabled in Windows, if so desired?
There are ways with registry keys to disable some of the mitigations, yes. But, despite what you see Ubuntu doing, don't do this unless you absolutely know what you're doing! If you have to ask, you don't know what you're doing. :p Ubuntu is talking about a very specific workload very few desktop users will be performing on a regular basis. It has nothing to do with gaming, frame rates, or general-purpose computing like email and word processing. This is about compute workloads that run almost solely on a GPU: accelerated video transcoding and editing, 3D art rendering (NOT games; I'm talking about Maya, Blender, etc.), brute-forcing one-way hashes (password cracking), and so on.

As I said a few posts above this one, willy nilly disabling Spectre mitigations on newer CPUs will hurt performance rather than helping it. I wouldn't be surprised if this is true on Qualcomm and Apple CPUs and associated GPUs at this point as it is for AMD/Intel.
 
Upvote
6 (9 / -3)

hillspuck

Ars Scholae Palatinae
2,179
Oh wow. Why didn't anyone think of that before? You've solved computer security!
It's a shame to see (currently) your post upvoted so heavily and the one you responded to downvoted. That's literally the solution in this case, and what the article points out and what Ubuntu decided was the solution. The poster was very specific in saying that this is an appropriate response to this particular problem on single-user systems. They know it's not applicable to every platform and never claimed it was the one single solution.

But if you'd rather sink 20% of your gpu performance into risks that aren't a reality for your system, go for it I guess.

(Though note the caveats that A_Very_Tired_Geek posted above.)
 
Upvote
-4 (23 / -27)

kliu0x52

Ars Scholae Palatinae
757
Oh finally, people are starting to act rationally. Spectre has been overblown by the tech media from the start. People hear about "unfixable flaw in the CPU!" and panic without realizing just how many practical challenges there are to pulling off a successful exploit using Spectre. It's always been mostly a lab curiosity with almost zero potential impact for typical home usage.
 
Upvote
0 (19 / -19)
Oh finally, people are starting to act rationally. Spectre has been overblown by the tech media from the start. People hear about "unfixable flaw in the CPU!" and panic without realizing just how many practical challenges there are to pulling off a successful exploit using Spectre. It's always been mostly a lab curiosity with almost zero potential impact for typical home usage.
Replace "Spectre" with "Y2K" in your statement and tell me if you still agree with what you said.

When technology works, no one notices. And that's the point.
 
Upvote
-1 (11 / -12)
Oh finally, people are starting to act rationally. Spectre has been overblown by the tech media from the start. People hear about "unfixable flaw in the CPU!" and panic without realizing just how many practical challenges there are to pulling off a successful exploit using Spectre. It's always been mostly a lab curiosity with almost zero potential impact for typical home usage.
There have been plenty of PoCs, including browser-based attacks with JavaScript, so it's well beyond a "theoretical issue."

However, a lot of layers of mitigations have been put in place so perhaps it's ok to roll some of the layers back to regain some performance.

ex https://security.googleblog.com/2021/03/a-spectre-proof-of-concept-for-spectre.html
 
Upvote
17 (18 / -1)

mdrejhon

Ars Praefectus
3,108
Subscriptor
Just remember that JavaScript programs are still programs!
HTML standardized some Meltdown/Spectre mitigations (Mozilla Standard Security Requirements).

For example, performance.now() is limited in precision to 0.1 ms (Chrome) or 1 ms (Firefox/Safari).

Your web server can enable some complicated settings (e.g. COOP/COEP headers) to turn this off and improve precision to as fine as 5 microseconds, in a strong sandbox mode.

This special web server setting disables most Meltdown/Spectre mitigations for a specific webpage -- regaining precise performance.now() timestamps (5-20 microseconds depending on browser) as well as SharedArrayBuffer in JavaScript/WASM (shared arrays are great for WebWorker multithreading). But there are side effects, such as the inability to use external APIs, analytics, ad networks, third-party frameworks, etc. (unless hosted/routed through your domain name instead of an external one).

The webserver-triggered strong sandbox mode automatically enabled in browsers requires the strict COOP/COEP headers that limit the JavaScript's ability to communicate to only its origin server*, in exchange for unlocking the features gated behind Meltdown/Spectre mitigations.

Very few websites use this. TestUFO 2.2+ Beta now uses COOP/COEP for some display motion test pages, such as the Animation Time Graph, so I've had to become familiar with this little-known server-triggered strict-sandbox mode that modern browsers now have.

More information about programming a webserver/javascript to disable Meltdown/Spectre mitigations: window.crossOriginIsolated on MDN

EDIT: *Or a third-party server that authorizes YOUR other domain name as an authorized origin in all its HTTP headers (doable if you own both domain names; they can then emit headers authorizing each other, for things like API communication without Meltdown/Spectre mitigations getting in the way). Two different domain names declaring each other as the same origin can still talk to each other (e.g. for APIs), but it is much harder in the strict sandbox mode, since you really have to be careful with all the HTTP headers on all the correct HTTP requests, and HTTP headers are automatically cached by browsers and intermediaries like Cloudflare. Tricky, but it enables high-performance SharedArrayBuffers, so you can do JavaScript multithreading via WebWorkers and handle shared data faster across threads. And other goodies normally disabled. Due to the multithreaded shared-data performance benefits, some 3D videogame websites have now upgraded to COOP/COEP; it's a new security domain to learn.
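For the curious, here's a minimal sketch of the two response headers a server sends to opt a page into this cross-origin isolated mode, plus the in-page check (the function name is illustrative; the headers and `crossOriginIsolated` are the real standardized names):

```typescript
// The two response headers that opt a page into cross-origin isolation.
// When both are honored, browsers set crossOriginIsolated to true, expose
// SharedArrayBuffer, and raise performance.now() resolution
// (exact precision varies by browser).
function crossOriginIsolationHeaders(): Record<string, string> {
  return {
    // Severs window references between this page and cross-origin openers/popups
    "Cross-Origin-Opener-Policy": "same-origin",
    // Every subresource must opt in via CORS or a Cross-Origin-Resource-Policy header
    "Cross-Origin-Embedder-Policy": "require-corp",
  };
}

// In page script, confirm the browser actually granted isolation before
// relying on SharedArrayBuffer for WebWorker multithreading:
//   if (globalThis.crossOriginIsolated) {
//     const shared = new SharedArrayBuffer(1024);
//     // ...hand `shared` to a Worker...
//   }
```

Note that `require-corp` is exactly the part that breaks third-party analytics and ad scripts mentioned above: anything not explicitly opted in simply fails to load.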
 
Last edited:
Upvote
25 (25 / 0)

Auie

Ars Scholae Palatinae
2,114
.... Just don’t download and then execute a virus.
Philosoraptor.jpg
 
Upvote
10 (13 / -3)
So one re-enables the mitigations by setting the 'disable mitigations' flag to false? Is that kind of double negation common in naming things like build flags?
My view is that the default, expected (and most often safe) value should be 0. So if a value isn't set, the default, expected, and safe behaviour happens. You would need to actively change the default for anything else to take effect: either disabling a security feature, enabling some extra experimental feature, etc.

However, while I personally prefer that features are ENABLED by default (and thus you DISABLE them by changing a config file), I do see lots of cases where it's probably more correct to disable stuff...

Example, password policies on a system: Should you:
- DISABLE strong password policy
or
- ENABLE weak password policy

?
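The unset-defaults-to-false behavior being discussed can be sketched like this (the flag name is from the article; the lookup logic is illustrative, not the actual Compute Runtime build system):

```typescript
// Many build systems treat an unset flag as false/0. With a negatively
// named flag, "nobody set anything" therefore keeps the safe behavior.
type BuildFlags = Record<string, boolean | undefined>;

function mitigationsEnabled(flags: BuildFlags): boolean {
  // Unset -> false -> "do not disable" -> mitigations stay ON.
  return !(flags["NEO_DISABLE_MITIGATIONS"] ?? false);
}

// Default build keeps mitigations on:
//   mitigationsEnabled({})                                 // true
// Only an explicit opt-out turns them off:
//   mitigationsEnabled({ NEO_DISABLE_MITIGATIONS: true })  // false
```

With a positively named flag (say, ENABLE_MITIGATIONS), an unset value would default the mitigations to off, which is exactly the unsafe default the double negative avoids.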
 
Upvote
5 (5 / 0)

Excors

Ars Centurion
366
Subscriptor++
Ultimately, cryptography engineer Sophie Schmieg said, the benefit of the mitigations isn't worth the performance costs to GPU performance, where predicting instruction branches is more critical than for CPU performance.

“The system can effectively parallelize a lot more actions without requiring expensive synchronization points between the cores,” Schmieg said. “If anything, something massively parallel like a GPU wants to do branch prediction even more liberally than a CPU.”
I'm pretty sure this is wrong. GPUs don't do branch prediction or speculative execution. Being massively parallel means they can happily stall a thread for dozens or hundreds of cycles while resolving branches and memory accesses, since they can find plenty of other threads to keep the execution units busy in the meantime. Speculative execution is needed on CPUs because each core runs only a single thread (or maybe two), and they have to do lots of fancy tricks to extract a modest amount of parallelism from it and minimise the impact of stalls.

The NEO_DISABLE_MITIGATIONS flag is simply disabling the C++ compiler features that insert Spectre mitigations like retpolines into the library's CPU code. I don't believe it has any effect on the GPU itself, and won't improve performance for applications that are bottlenecked by the GPU; it's just going to reduce the API overhead.
 
Upvote
8 (8 / 0)
It's a shame to see (currently) your post upvoted so heavily and the one you responded downvoted. That's literally the solution in this case, and what the article points out and what the Ubuntu decided was the solution. The poster was very specific in saying that this is an appropriate response to this particular problem in single-user systems. They know it's not applicable to every platform and never claimed it was the one single solution.

But if you'd rather sink 20% of your gpu performance into risks that aren't a reality for your system, go for it I guess.

(Though note the caveats that A_Very_Tired_Geek posted above.)
I think people are just responding negatively to the smug “don’t download and execute a virus” thing, which is like, eh, fair game, it is a bit smug. But, also, it has always been the obvious and only real solution (for regular users who don’t need to act as a host).

Hardware barely works for programs that want to run correctly. It is an accumulation of 40 years worth of hacks, where for the first 30 nobody gave any real thought to protecting the user against the programs they decided to run. Trying to add security after the fact is hopeless.
 
Upvote
0 (1 / -1)

CutterX

Smack-Fu Master, in training
2
It's a shame to see (currently) your post upvoted so heavily and the one you responded downvoted. That's literally the solution in this case, and what the article points out and what the Ubuntu decided was the solution. The poster was very specific in saying that this is an appropriate response to this particular problem in single-user systems. They know it's not applicable to every platform and never claimed it was the one single solution.

But if you'd rather sink 20% of your gpu performance into risks that aren't a reality for your system, go for it I guess.

(Though note the caveats that A_Very_Tired_Geek posted above.)
Every system is multi-user unless you're running it as root.
 
Upvote
-14 (0 / -14)