Linux bitten by second severe vulnerability in as many weeks

UserIDAlreadyInUse

Ars Tribunus Angusticlavius
7,800
Subscriptor
I am a bit confused by the mentions of virtual machines here. The article almost makes it sound like this is a VM escape vulnerability, but I don't think that's the case based on other sources.
It's not a VM escape mechanism; only a container-escape mechanism. It only affects systems running with a shared kernel, not systems running fully compartmentalized.
 
Upvote
176 (176 / 0)

GFKBill

Ars Tribunus Militum
2,968
Subscriptor
Shortly after the disclosure, someone else leaked key details
Anonymously I presume?

Are we going to see a tsunami of these, with AI code review? Can't find any indication that's how this exploit(s) was discovered, other than being inspired by CopyFail, but seems likely.
 
Upvote
61 (61 / 0)

SirOmega

Ars Tribunus Angusticlavius
6,215
Subscriptor++
So whatever happened to that 90 day rule? I thought people who found these sorts of exploits were supposed to tell the vendors/maintainers, and give them 90 days to get the patches out (and in this case, the distros) before disclosing vulnerabilities.

Is that not a thing anymore?
 
Upvote
93 (96 / -3)
So whatever happened to that 90 day rule? I thought people who found these sorts of exploits were supposed to tell the vendors/maintainers, and give them 90 days to get the patches out (and in this case, the distros) before disclosing vulnerabilities.

Is that not a thing anymore?
From what I read elsewhere, it was someone reverse engineering the patch (Linux being open source) to figure out the exploit and then posting it online to let the world know prior to the 90 days.
 
Upvote
123 (126 / -3)
you know, I'd be much more upset about the reboot if Linux were like the Windows on my work laptop and I had to restart every time something sneezed. But I haven't restarted my Linux desktop box in 4 months, and my Raspberry Pi that serves as my self-hosted router just reported crossing a year of uptime.

Which is to say, in the past seven days I'll have rebooted my work laptop due to forced restarts more times than all of my Linux machines combined in 2026.
You mean you don’t enjoy 3 or more nested reboots during Windows patches? I always assume it’s the cold revenge for not having to reboot after every app install anymore.
 
Upvote
12 (35 / -23)

ryanr

Ars Centurion
217
Subscriptor
Both this and copyfail use kernel extensions that I have no use for on my servers. These exploits demonstrate that keeping around unused kernel modules increases attack surface. Disabling them the moment an exploit is announced is the best I can do right now, but what happens when a bad actor finds one of these first rather than a security researcher?

Maybe if these sorts of exploits continue showing up with some regularity, Linux will get a mechanism to only allow modules that are explicitly whitelisted, rather than the modprobe blacklist mechanism? Or maybe there's already a way to do this? I mean, one can simply delete the unused modules, but since they are all part of the same linux-image package in Debian that's hardly an elegant solution.
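For what it's worth, the per-module disabling I do today looks roughly like this, in a file under /etc/modprobe.d/ (dccp here is just a placeholder for whatever unused module you want gone):

    # "blacklist" only stops automatic loading by alias; overriding "install"
    # with /bin/false also blocks explicit and on-demand loads.
    blacklist dccp
    install dccp /bin/false

And once a box has loaded everything it actually needs, sysctl -w kernel.modules_disabled=1 switches off module loading entirely until the next reboot (it's a one-way switch), which is about as close to a whitelist as I know how to get without rebuilding the kernel.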
 
Upvote
85 (87 / -2)

AdamWill

Ars Scholae Palatinae
958
Subscriptor++
"The best response for anyone using Linux is to install patches immediately. While fixes likely require a reboot, protection from a threat as severe as Dirty Frag outweighs the cost of disruptions."

Honestly, as a distro maintainer (I'm the Fedora quality team lead), I would take a less urgent line than this for most folks. For single-user systems a root privilege escalation vuln is not really a huge deal. You are relatively unlikely to be vulnerable to it (because untrusted parties likely don't have shell access to your system), and root privilege separation is less of a big deal to you; the most likely user account to be compromised on your system is your own, and that's already where all your stuff is. If somebody compromises your user account they don't need a root privilege escalation exploit to do bad stuff to you.

This class of vuln is much worse news for anyone administering a system with multiple users. (Though I don't think it's been indicated that this one is usable for container or VM escape in common scenarios, which is a blessing).
 
Last edited:
Upvote
247 (251 / -4)

dreilide

Smack-Fu Master, in training
98
It's not a VM escape mechanism; only a container-escape mechanism. It only affects systems running with a shared kernel, not systems running fully compartmentalized.
Yeah, the Google page Dan links to provides absolutely no reasoning for mentioning virtual machines.
 
Last edited:
Upvote
29 (31 / -2)

el_oscuro

Ars Praefectus
3,176
Subscriptor++
Both this and copyfail use kernel extensions that I have no use for on my servers. These exploits demonstrate that keeping around unused kernel modules increases attack surface. Disabling them the moment an exploit is announced is the best I can do right now, but what happens when a bad actor finds one of these first rather than a security researcher?

Maybe if these sorts of exploits continue showing up with some regularity, Linux will get a mechanism to only allow modules that are explicitly whitelisted, rather than the modprobe blacklist mechanism? Or maybe there's already a way to do this? I mean, one can simply delete the unused modules, but since they are all part of the same linux-image package in Debian that's hardly an elegant solution.
As someone who has performed pen tests, I can say old/unused code provides some of the juiciest exploits we find. I once found something like this in code our organization hadn't used in over 10 years. But it was still installed, providing the attack surface.
I like your whitelist idea; maybe have a whitelist.d directory and some commands for managing it.
 
Upvote
50 (50 / 0)
I own a very successful business using Macs. This restarting/rebooting thing. What is that? According to the Linfets, Linux is totally secure!

In real life, NO operating system is secure. Remember the days when using the 'net was fun? And all we had to worry about was flashing banner ads? What went wrong....
You do realize that Linux is the operating system powering most of the servers on the internet, right? Linux isn’t just some desktop OS with a tiny market share.

The people who would need to worry about reboots are sysadmins who have to plan downtime.
 
Upvote
133 (135 / -2)

Arstotzka

Ars Scholae Palatinae
1,242
Subscriptor++
So if one were to rent an ec2, one could get root and then move laterally through AWS? Is that the thing?
No. You already get root on an EC2 instance.

But if you found someone else’s EC2 instance that was vulnerable — say, a Jenkins server doing CI/CD — you could move from the restricted user they should be running Jenkins as up to root.

Hell, you could compromise your company’s own infrastructure if you were so inclined; then you wouldn’t even need to go find infrastructure to pwn.
 
Upvote
48 (48 / 0)

adamsc

Ars Praefectus
4,278
Subscriptor++
So if one were to rent an ec2, one could get root and then move laterally through AWS? Is that the thing?

No - your EC2 instance doesn’t share a kernel memory space with the host hypervisor, and these attacks work by getting something into the kernel’s cache and then modifying it so that, e.g., when it next executes /bin/su, the code which actually runs is not what was originally loaded from the file system.

What this is nasty for are things like CI/CD servers or platform-as-a-service companies where untrusted code runs on the same kernel across multiple users.
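If you're ever unsure which side of that line a given box sits on, systemd-detect-virt gives a rough first answer (just a quick sketch, not a security-boundary test):

    # Prints the container technology and exits 0 if the workload shares the host kernel
    systemd-detect-virt --container

    # Prints the hypervisor and exits 0 if the workload runs its own kernel in a VM
    systemd-detect-virt --vm

    # Prints "none" on bare metal
    systemd-detect-virt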
 
Upvote
36 (36 / 0)

AdamWill

Ars Scholae Palatinae
958
Subscriptor++
No - your EC2 instance doesn’t share a kernel memory space with the host hypervisor, and these attacks work by getting something into the kernel’s cache and then modifying it so that, e.g., when it next executes /bin/su, the code which actually runs is not what was originally loaded from the file system.

What this is nasty for are things like CI/CD servers or platform-as-a-service companies where untrusted code runs on the same kernel across multiple users.
It's also nasty if you're, say, a community-based Linux distribution with a shared server that anyone with a project account can ssh into. And an infrastructure setup where there's a server that a larger group of semi-trusted people have user-only access to, but a smaller group of trusted people have root access to, from which you can access any other server as root if you're root, but not if you're only a user.

Boy do we have fun with these. Especially when they get turned into 0-days...:rolleyes:
 
Upvote
60 (61 / -1)

JimDavis

Smack-Fu Master, in training
85
Both this and copyfail use kernel extensions that I have no use for on my servers. These exploits demonstrate that keeping around unused kernel modules increases attack surface. Disabling them the moment an exploit is announced is the best I can do right now, but what happens when a bad actor finds one of these first rather than a security researcher?

Maybe if these sorts of exploits continue showing up with some regularity, Linux will get a mechanism to only allow modules that are explicitly whitelisted, rather than the modprobe blacklist mechanism? Or maybe there's already a way to do this? I mean, one can simply delete the unused modules, but since they are all part of the same linux-image package in Debian that's hardly an elegant solution.
There's a recent "killswitch" proposal (https://lore.kernel.org/all/20260507070547.2268452-1-sashal@kernel.org/) that might be helpful if it's adopted.
 
Upvote
13 (13 / 0)

JimDavis

Smack-Fu Master, in training
85
"Some Ubuntu configurations use AppArmor to prevent untrusted users from creating namespace contents. That, in turn, neutralizes the ESP technique." My stock Ubuntu 26.04 laptop allows creating unprivileged user name spaces -- as an experiment I tried disabling that feature (as per https://access.redhat.com/security/vulnerabilities/RHSB-2026-003) and it broke Firefox sandboxing. Which pretty much breaks Firefox, period.
 
Upvote
7 (7 / 0)

Shavano

Ars Legatus Legionis
69,067
Subscriptor
Anonymously I presume?

Are we going to see a tsunami of these, with AI code review? Can't find any indication that's how this exploit(s) was discovered, other than being inspired by CopyFail, but seems likely.
I was assured that AI was going to bring us more secure and reliable software, not be used first to successfully attack existing security protections.

Oh well.
 
Upvote
21 (26 / -5)

Maltz

Ars Scholae Palatinae
1,032
I was assured that AI was going to bring us more secure and reliable software, not be used first to successfully attack existing security protections.

Oh well.
Why would anyone expect it to do one without also doing the other? Like so many things, it's just a tool, if a particularly powerful one. Whether it's used for good or ill is up to the person using it.
 
Upvote
28 (30 / -2)

balazer

Ars Praetorian
480
Subscriptor
Dan, you got the timeline wrong.

On 2026-04-29, Hyunwoo Kim reported the vulnerability to the Linux kernel maintainers. On 2026-05-05, a kernel fix was published. On 2026-05-07, Kim notified Linux distro maintainers about the vulnerability, and a disclosure embargo was set for 5 days: no maintainer was supposed to publicly disclose details of the vulnerability for those 5 days, and once the 5 days were up, Kim would be free to disclose. On 2026-05-07, someone else independently analyzed the fix, figured out the vulnerability, and, not being aware of the embargo, published the details along with exploit code. So the embargo was broken: the secret was out. After that, Kim got permission from distro maintainers and publicly disclosed the details along with mitigation instructions.

There was no leak. It was not a zero-day. It's just another case of Linux distributions being too slow to take up kernel fixes.

Kernel changes are public the moment they're published. Even if a change isn't described as a security fix, people will figure out soon enough what the security implications are. Even if the good guys don't announce it, the bad guys will be exploiting it. The clock is ticking to get fixes deployed.
 
Upvote
73 (76 / -3)

McTurkey

Ars Tribunus Militum
2,245
Subscriptor
I was assured that AI was going to bring us more secure and reliable software, not be used first to successfully attack existing security protections.

Oh well.
It will do that, but there will also be a lot of new exploits discovered. The hope is that the overall balance tips towards security for a change, because once the low-hanging fruit are caught out by early models, the resources to run the really advanced models will mostly only be available to white-hat hackers.
 
Upvote
9 (10 / -1)

kickahaota

Seniorius Lurkius
44
Subscriptor++
You mean you don’t enjoy 3 or more nested reboots during Windows patches? I always assume it’s the cold revenge for not having to reboot after every app install anymore.
I worked on Windows Update for a big chunk of my career at Microsoft. Which meant that I worked on scheduling/downloading/staging the updates; I didn't work on the Component-Based Servicing, the code that actually installed Windows patches. But I certainly got to talk with the folks who did.

I didn't envy them. So much of what's made Windows popular is layer after layer after layer of back-compatibility, to make sure that old software and drivers keep working... which is great, unless you want to change things. There were things that would have sped up reboots quite a bit (or even eliminated them in some cases), but they didn't happen because they'd have broken back-compat.

In my mind, the current problem isn't the nested reboots. The vast majority of users care about the total time between clicking "Restart Now" and getting their computer back. They don't care what happens in between, as long as it works.

The problem is that we had that time really trimmed down during the Windows 7 and Windows 8 days, and now it's crept back up enough to be annoying.

(For those who don't remember: It used to be that very little of the installation work happened until you restarted. Which meant that updates could take well over five minutes on slow hard drives. Then Component-Based Servicing got smarter, and could put almost all of the new OS files in place before prompting you to restart; that made the restart much quicker. And, of course, SSDs became universal, which made all the file IO much, much faster. Now there's newer installer technology that has a lot of technical benefits, but it's back to doing a lot more work during the update-and-restart sequence, so everything has slowed down again.)
 
Upvote
118 (118 / 0)

Charles Hunter

Smack-Fu Master, in training
71
I own a very successful business using Macs. This restarting/rebooting thing. What is that? According to the Linfets, Linux is totally secure!
What, pray tell, is a "Linfet"? Sounds pejorative. Given the context, I doubt it's a linear field-effect transistor.

I run both macOS and Linux (Debian) systems. The primary reason both platforms don't have uptimes measured in years is because my local electricity supplier can't chew gum and shunt electrons simultaneously. Some grid outages just exceed the capacities of my UPSes, forcing me to shut down.

But. When it comes to keeping the OSes patched, it's chalk and cheese.

I can't actually remember Debian telling me to reboot after an update. Updates are fast and reliable, to the extent that it's routine to run an update/upgrade cycle on each machine once a week, in the knowledge there will be zero downtime and any interruption to a particular service will be milliseconds at most. A Debian machine has never bricked on me after an upgrade, requiring DFU hocus-pocus. No Debian machine has ever silently enabled features I turned off before. Neither does Debian suddenly go all nanny-state on me like friggin' Gatekeeper does on macOS.

I will allow that macOS (and iOS) updates are slightly less frequent but they take significant fractions of eternity to download and apply, involve multiple reboots, and take each machine out of service for the better part of an hour. Then you hold your breath and hope the machine didn't get bricked along the way, after which you have to deal with whatever "welcome" and "set up this feature you never wanted" screen(s) that some marketing twat decided to foist upon you. Even if you say "no" to everything, the twat has a second go to make sure you didn't really mean yes when you said no. And then there's the pernicious silent re-enabling of things you've turned off before. Or Gatekeeper getting all-in-ya-face with some fresh idiocy (yes, ducky, it's my Mac, my network and my printer and, FFS, I do actually want to print that document, so kindly shut-the-F-up and get on with it). After all that, you might, finally, get back to productive work.

Still, let's not forget the AppleID farce where upgrading the OS on one bit of kit you've had for years gets announced, on every other device, as a "new" iWhatsIt that just joined your party.

I'm fortunate enough to have a 1 Gbps fibre connection so at least Apple updates download fairly quickly. But I also look after the tech needs of a family member who is stuck behind 20 Mbps ADSL. That remote location has a Raspberry Pi terminating a VPN for remote access. I can easily keep that RPi fully up-to-date. But the macOS and iOS devices? Forget it. There's just not enough bandwidth for multi-gigabyte update packages.

For macOS I could prepare a USB drive and send it by post. There's no similar option I'm aware of for iOS. Even were I to use USB "sneakernet", upgrading macOS would still incur the long downtime, bricking anxiety (zero ability to recover that remotely), first-boot nonsense, silent re-enabling rubbish, Gatekeeper foolishness, and so on.

On top of all that, we all have to put up with the annual ritual where yet more twats prance about on stages to try to persuade us that changing the entire UI and slapping some "insipid glass" label on the side is actually going to be of any use to 95-year-old Alzheimer's sufferers who can't actually learn new things, and for whom the only viable options are "change nothing" or "take away their computer".

When you compare my Apples and Debians, which OS do you reckon has the better chance of having the latest security patches installed?
 
Upvote
28 (44 / -16)

entropy_wins

Ars Tribunus Militum
1,698
Subscriptor++
"The best response for anyone using Linux is to install patches immediately. While fixes likely require a reboot, protection from a threat as severe as Dirty Frag outweighs the cost of disruptions."

Honestly, as a distro maintainer (I'm the Fedora quality team lead), I would take a less urgent line than this for most folks. For single-user systems a root privilege escalation vuln is not really a huge deal. You are relatively unlikely to be vulnerable to it (because untrusted parties likely don't have shell access to your system), and root privilege separation is less of a big deal to you; the most likely user account to be compromised on your system is your own, and that's already where all your stuff is. If somebody compromises your user account they don't need a root privilege escalation exploit to do bad stuff to you.

This class of vuln is much worse news for anyone administering a system with multiple users. (Though I don't think it's been indicated that this one is usable for container or VM escape in common scenarios, which is a blessing).
My biggest worry when I read these is cloud - not mine, but my bank's, etc.
CentOS/Ubuntu/Debian on Google are pretty good; I'm not so involved in Azure/AWS. Thanks for the perspective, a nice Ars comment.
 
Upvote
26 (26 / 0)
Yeah, the Google page Dan links to provides absolutely no reasoning for mentioning virtual machines.
The unfounded panic over VMs is just the tip of the iceberg. Sadly, this piece demonstrates a worse-than-LLM level of technical comprehension, reading more like a mad-libs of security buzzwords than a factual vulnerability breakdown.

The author fundamentally conflates a standard Local Privilege Escalation (LPE) with a hypervisor breakout. If a standard user exploits this inside a virtual machine, they get root on that specific guest OS, not the underlying bare-metal host. Even the container threat model is butchered... unless a container is already running privileged or the attacker chains this with a distinct container escape, gaining root inside a restricted namespace doesn’t magically hand over the host kernel.

I hate to say it, but you could feed the raw Microsoft Threat Intel report to an LLM and get a far more coherent, technically accurate summary than what was published here. I understand the pressure of rapid-turnaround security journalism, but prioritizing speed over basic mechanical comprehension does a massive disservice to readers who actually need to assess their infrastructure risk.
 
Last edited:
Upvote
-2 (13 / -15)

dangoodin

Ars Tribunus Militum
1,648
Ars Staff
Yeah, the Google page Dan links to provides absolutely no reasoning for mentioning virtual machines.

You, and the people upvoting you, should actually read the part of the post that says:

However, there is a significant hurdle: the exploit usually requires high-level system permissions, such as CAP_NET_ADMIN. This means exploitation is less likely in hardened containerized environments (e.g., Kubernetes with default seccomp profiles). However, the risk remains significant for virtual machines or less restricted environments.
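For anyone gauging their own exposure, a rough way to check whether a given shell or container actually holds CAP_NET_ADMIN (capsh comes with the libcap tools; this is a quick sketch, not a full audit):

    # Effective capability bitmask of the current process
    grep CapEff /proc/self/status

    # Decode it into names; look for cap_net_admin in the output
    capsh --decode=$(awk '/CapEff/ {print $2}' /proc/self/status)

In a default unprivileged container you generally won't see cap_net_admin there.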
 
Upvote
20 (27 / -7)

GFKBill

Ars Tribunus Militum
2,968
Subscriptor
(For those who don't remember: It used to be that very little of the installation work happened until you restarted. Which meant that updates could take well over five minutes on slow hard drives.
The rest of your post was some nice insight, thanks!

That "well over five minutes" is a masterful piece of understatement though o_O
 
Upvote
20 (21 / -1)

dangoodin

Ars Tribunus Militum
1,648
Ars Staff
The unfounded panic over VMs is just the tip of the iceberg. Sadly, this piece demonstrates a worse-than-LLM level of technical comprehension, reading more like a mad-libs of security buzzwords than a factual vulnerability breakdown.

The author fundamentally conflates a standard Local Privilege Escalation (LPE) with a hypervisor breakout. If a standard user exploits CVE-2026-43284 inside a virtual machine, they get root on that specific guest OS, not the underlying bare-metal host. Even the container threat model is butchered... unless a container is already running privileged or the attacker chains this with a distinct container escape, gaining root inside a restricted namespace doesn’t magically hand over the host kernel.

I hate to say it, but you could feed the raw Microsoft Threat Intel report to an LLM and get a far more coherent, technically accurate summary than what was published here. I understand the pressure of rapid-turnaround security journalism, but prioritizing speed over basic mechanical comprehension does a massive disservice to readers who actually need to assess their infrastructure risk.

Wow, such confidence and swagger, with all the unnecessary rhetoric about "worse-than-LLM level of technical comprehension" and such. Please see my response to dreilide above. BTW, CopyFail had a similar container escape capability for Kubernetes and there was a PoC that proved it.
 
Upvote
33 (37 / -4)

jrj

Seniorius Lurkius
42
Subscriptor
"Some Ubuntu configurations use AppArmor to prevent untrusted users from creating namespace contents. That, in turn, neutralizes the ESP technique." My stock Ubuntu 26.04 laptop allows creating unprivileged user name spaces -- as an experiment I tried disabling that feature (as per https://access.redhat.com/security/vulnerabilities/RHSB-2026-003) and it broke Firefox sandboxing. Which pretty much breaks Firefox, period.
It doesn't universally disable unprivileged namespace creation; there's a separate sysctl for that. Instead it allows enabling unprivileged user namespaces on a per-application basis. So Firefox gets a profile that allows it to create unprivileged user namespaces, letting it build its sandbox, but arbitrary applications don't have that privilege.
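Roughly, the knobs involved look like this (sysctl names as on recent Ubuntu and RHEL kernels; exact defaults vary by release):

    # Blunt, distro-agnostic approach: forbid unprivileged user namespaces
    # entirely. This is what breaks Firefox's sandbox, as described above.
    sysctl -w user.max_user_namespaces=0

    # Ubuntu's middle ground: unprivileged user namespaces only for programs
    # whose AppArmor profile grants them, browsers included.
    sysctl -w kernel.apparmor_restrict_unprivileged_userns=1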
 
Upvote
4 (4 / 0)

GFKBill

Ars Tribunus Militum
2,968
Subscriptor
You, and the people upvoting you, should actually read the part of the post that says:
I think it's important to distinguish between a VM itself being vulnerable, and escaping a VM to the host and/or its other VMs being possible.

Containers yes (shared kernel), VMs no (discrete kernels), right?
 
Upvote
19 (20 / -1)