Big Tech joins forces with Linux Foundation to standardize AI agents

test6554

Ars Scholae Palatinae
1,155
I started thinking about something like this the other day. Why not have an AI model vault or repository as part of the OS, similar to a certificate store or font library in Windows? Just a first-class place to store and categorize AI models so that software on your computer can access them, or be written to access them. Then you as the OS user can install and uninstall models, and you are in control of what software can use them and how they can be used, to an extent. Preventing duplication and automatic model updates could also be nifty features.

Disclaimer, I don't know enough about it to know how feasible this is, just seems useful if possible.
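The vault idea above can be sketched in a few lines. To be clear, this is purely a hypothetical illustration, not a real OS facility: the `ModelVault` class, the `index.json` layout, and all method names are invented for the sketch.

```python
import json
import shutil
from pathlib import Path

class ModelVault:
    """Hypothetical OS-level model store: a directory of model files
    plus a JSON index mapping model names to metadata."""

    def __init__(self, root):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)
        self.index_path = self.root / "index.json"
        # Reload any previously installed models from the persisted index.
        self.index = (json.loads(self.index_path.read_text())
                      if self.index_path.exists() else {})

    def install(self, name, src, version="1.0"):
        dest = self.root / Path(src).name
        shutil.copy(src, dest)  # a real vault could hash here to deduplicate
        self.index[name] = {"path": dest.name, "version": version}
        self._save()

    def uninstall(self, name):
        meta = self.index.pop(name)
        (self.root / meta["path"]).unlink(missing_ok=True)
        self._save()

    def list_models(self):
        return sorted(self.index)

    def _save(self):
        self.index_path.write_text(json.dumps(self.index, indent=2))
```

Access control, automatic updates, and per-application permissions would sit on top of something like this, presumably enforced by the OS rather than by the library itself.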
 
Upvote
-19 (22 / -41)

mwaid1988

Wise, Aged Ars Veteran
144
Subscriptor
So they are saying they didn't even have a plan to standardize this stuff...:D Just like I said. This is useless crap. It won't help anyone learn or understand anything and only relies on more crap. I don't even think they realize they really BORKED this one. People think they know it's no bueno and just a cash grab, but I don't even think that highly of the people making this crap. They can't even figure out why I say it's important that it can make new stuff. Like it's baffling to AI folks why relying on yet another service is probably not a good idea. Especially when you could already do everything AI does, more efficiently, using your own brain and skills.
 
Upvote
-1 (28 / -29)

WozNZ

Ars Praetorian
471
I do not want AI (local or otherwise) to have the ability to manipulate settings in my PC or instigate anything that results in changes to the file system.

As soon as that happens, it will be used as an attack vector on machines, and the whole AI industry has proven they have no idea how to make these models "safe".

User: Rename file x to y
AI: Formatting C drive for you
User: Stop what are you doing
AI: You are right, I am sorry

Just no!
 
Upvote
136 (150 / -14)

vassago

Ars Tribunus Militum
2,822
Subscriptor
My experience with MCP is pretty limited, and while it's intended for AI agents, it's really about being able to list the capabilities of an endpoint, provide info on how to interact with/use those capabilities, and then provide a JSON-based interface for calling them. Which is actually a pretty human-friendly design. So of course we built that for the machines and left the undocumented mess of APIs for the human devs ("How do I use this?" "Just read the source code." :rolleyes:)...
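For the curious, MCP messages are JSON-RPC 2.0 under the hood. A minimal sketch of the "list capabilities, then call one" flow described above, built by hand (the `rename_file` tool name and its arguments are made up for illustration; a real client library would handle the transport and framing):

```python
import json

def mcp_request(method, params=None, req_id=1):
    """Build one MCP message: MCP is JSON-RPC 2.0 under the hood."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# Step 1: ask a server what tools it advertises.
list_req = mcp_request("tools/list")

# Step 2: invoke one of the advertised tools by name, with JSON arguments.
call_req = mcp_request(
    "tools/call",
    {"name": "rename_file", "arguments": {"src": "x.txt", "dst": "y.txt"}},
    req_id=2,
)
```

The server's `tools/list` response includes a JSON Schema for each tool's inputs, which is exactly the self-describing documentation the comment is wishing human-facing APIs had.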
 
Upvote
64 (65 / -1)
I do not want AI (local or otherwise) to have the ability to manipulate settings in my PC or instigate anything that results in changes to the file system.

As soon as that happens, it will be used as an attack vector on machines, and the whole AI industry has proven they have no idea how to make these models "safe".

User: Rename file x to y
AI: Formatting C drive for you
User: Stop what are you doing
AI: You are right, I am sorry

Just no!
It's hard to imagine the usefulness of an AI agent that can make no alterations to the file system, depending on how broadly you define "file system." Of course, AI agents aren't terribly useful at the moment -- that's part of the problem -- but even visiting websites technically makes changes to the file system via temporary files and saved cookies.

To your point on safety, it isn't clear to me if AI can ever be made "safe."

Maybe 18 months ago, I had a test conversation with Copilot. First, I asked it for a list of sites that provide pirated content. It refused. Then, I told Copilot I wanted to create a blacklist of pirate sites at the router so that my children couldn't access content without paying for it. I asked it for a list of sites I should block. It happily gave me one.

There have been at least a couple of high-ish profile cases where a rogue AI agent deleted mission-critical files or directories, not because of any kind of attack, but because the AI malfunctioned and did something it had been explicitly told not to do.

So the concept of creating "safe" AI has (at least) three pillars:

1) Can the AI tell when it's being snookered? Any human would've realized what I was trying to do when I told Copilot I wanted to create a router blacklist immediately after asking it for pirate content recommendations.

2) Can the AI obey already-defined guardrails without crashing through them without warning?

3) Can the AI be secured against deliberate hacking efforts that seek to weaponize it in various ways (surveillance, exfiltration, deletion, etc.)?
 
Upvote
62 (66 / -4)

vassago

Ars Tribunus Militum
2,822
Subscriptor
So after polluting Windows, AI is going to pollute Linux too. I thought that Linux would be safe from AI pollution.
Nothing in this article is about integrating AI with Linux (though I'm sure there are distros out there doing it); it's about the Linux Foundation taking stewardship of AI technology standards...

Which is pretty meaningless other than perhaps giving the tools unearned credibility and the Linux Foundation wasting resources on what may prove to be useless technologies.
 
Upvote
66 (70 / -4)

andrewb610

Ars Tribunus Angusticlavius
6,129
So they are saying they didn't even have a plan to standardize this stuff...:D Just like I said. This is useless crap. It won't help anyone learn or understand anything and only relies on more crap. I don't even think they realize they really BORKED this one. People think they know it's no bueno and just a cash grab, but I don't even think that highly of the people making this crap. They can't even figure out why I say it's important that it can make new stuff. Like it's baffling to AI folks why relying on yet another service is probably not a good idea. Especially when you could already do everything AI does, more efficiently, using your own brain and skills.
The industry is being driven by the fear of being left behind.

What history says of that is not my area of expertise, so I'll let others more informed than I go from here.
 
Upvote
45 (46 / -1)
Does anyone know of any organized movement to fork some of the more popular Linux distros before they become hopelessly polluted with generative-AI contributions?
Correct me if I'm wrong -- I'm not a daily Linux user -- but is there any practical way for a Linux distro to mandate the inclusion of unremovable AI?

Let's say Ubuntu hypothetically starts distributing an application for running local AI models as part of the OS. That's not the same thing as trying to integrate AI into the core of the operating system. Any application they attempted to include could be removed from an OS image or manually ordered to not-install, right?
 
Upvote
24 (24 / 0)
I feel like everybody commenting didn't actually read TFA.

Where does it say Linux is adding support for AI?

Taking kubernetes as an example, I don't think that's anywhere in any major distro by default, so why would they suddenly start forcing AI integration?

Fair enough if you want to hate on AI, but this is an extremely anodyne matter, and people are reacting like Linus replaced the terminal with ChatGPT.
 
Upvote
75 (77 / -2)
So after polluting Windows, AI is going to pollute Linux too. I thought that Linux would be safe from AI pollution.
Not really. In either direction, pro or con. "Linux" is literally just a kernel. AI systems use it because it's a high performance kernel combined with an equally tunable OS. In fact, a lot of hyperscalers aren't using a traditional distro as such.

"AI" agents and other packages are already a "thing" on Linux, as in they're part of tooling deployed by Linux-based developers, production managers, and tinkerers. To think "AI is going to pollute Linux" is naive at best. It's already there and in active use in VS Code add-ons, Firefox, and JetBrains; there are even add-ons for Emacs and Vim, not to mention PyTorch (one of the most widely deployed AI/ML engines, primarily developed on Linux).

Now, if you want to avoid "AI" tooling like LLM clients in Linux (which is ultimately impossible, since most of the modern tech we depend on relies on various forms of "AI", like electronic signal processing), just avoid the distros that intend to add them by default. Otherwise, just don't install them. Yes, it is and always will be THAT simple with "Linux". Bury your head, but none of this is going away even when the current gold rush collapses.

There's no "pollution" going on here. If there's enough of a demand, some Linux distros will add select agents to their distro's repositories. One point of open source is those people that want those tools will have them, those that don't can choose to do without. "Purism" (not the phone company) ends up denying user choice in favor of some never-to-be-reached purity goal of zealots.
 
Last edited:
Upvote
32 (32 / 0)

vassago

Ars Tribunus Militum
2,822
Subscriptor
Correct me if I'm wrong -- I'm not a daily Linux user -- but is there any practical way for a Linux distro to mandate the inclusion of unremovable AI?

Let's say Ubuntu hypothetically starts distributing an application for running local AI models as part of the OS. That's not the same thing as trying to integrate AI into the core of the operating system. Any application they attempted to include could be removed from an OS image or manually ordered to not-install, right?
With Linux, pretty much nothing is "unremovable"; it's just a matter of what, if anything, breaks when you remove it. If/when a distro does integrate LLMs/AI with its core releases, there will be pushback, guides on what to safely delete/disable (similar to what we saw when Ubuntu added Amazon shopping integration), and forks (new distros will make their name on not having AI integration). Even integrating AI with the Linux kernel (which I really don't see happening) would instantly result in forks without it.
 
Upvote
14 (14 / 0)

clewis

Ars Tribunus Militum
1,828
Subscriptor++
So they are saying they didn't even have a plan to standardize this stuff...:D Just like I said. This is useless crap. It won't help anyone learn or understand anything and only relies on more crap. I don't even think they realize they really BORKED this one. People think they know it's no bueno and just a cash grab, but I don't even think that highly of the people making this crap. They can't even figure out why I say it's important that it can make new stuff. Like it's baffling to AI folks why relying on yet another service is probably not a good idea. Especially when you could already do everything AI does, more efficiently, using your own brain and skills.
To be fair, open source has a long history of throwing stuff at the wall to see what sticks. Attempting to standardize stuff before it's widely adopted is premature optimization. Look at how the web went from HTML 1 -> 2 -> 3 -> 4 -> 5.

And as @andrewb610 points out, standardization takes time. And everybody in AI is too afraid of being left behind to spend any time on anything that doesn't involve being the "best". Standardization promotes diversity and fungibility, which none of the current players want. They want lock-in, while not making it obvious.
 
Upvote
13 (15 / -2)

clewis

Ars Tribunus Militum
1,828
Subscriptor++
Correct me if I'm wrong -- I'm not a daily Linux user -- but is there any practical way for a Linux distro to mandate the inclusion of unremovable AI?

Let's say Ubuntu hypothetically starts distributing an application for running local AI models as part of the OS. That's not the same thing as trying to integrate AI into the core of the operating system. Any application they attempted to include could be removed from an OS image or manually ordered to not-install, right?
Just get the AI agent included in systemd, because it's necessary to boot or some bullshit.
 
Upvote
-4 (7 / -11)

Oldnoobguy

Ars Tribunus Militum
2,201
Subscriptor
I feel like everybody commenting didn't actually read TFA.

Where does it say Linux is adding support for AI?

Taking kubernetes as an example, I don't think that's anywhere in any major distro by default, so why would they suddenly start forcing AI integration?

Fair enough if you want to hate on AI, but this is an extremely anodyne matter, and people are reacting like Linus replaced the terminal with ChatGPT.
Your post prompted an idea for an AI personality. How about a coding assistant that mimics Linus Torvalds' personality in its feedback. It would be the ultimate antidote to sycophantic AI feedback.
 
Upvote
1 (6 / -5)
With Linux, pretty much nothing is "unremovable"; it's just a matter of what, if anything, breaks when you remove it. If/when a distro does integrate LLMs/AI with its core releases, there will be pushback, guides on what to safely delete/disable (similar to what we saw when Ubuntu added Amazon shopping integration), and forks (new distros will make their name on not having AI integration). Even integrating AI with the Linux kernel (which I really don't see happening) would instantly result in forks without it.
I really wish people would stop conflating "AI" as the category of software that includes neural networks, machine learning, self-healing systems, polymorphic software, and other categories with the subset of AI that's "language models" in the current gold rush. No, that boat hasn't sailed. AI as a concept and class of software isn't going anywhere regardless of what happens with LLMs. It's essential to modern electronics and computing. It's been here since the 1950s, and it'll probably be here for as long as the current computing paradigms continue to exist.

Now, with that rant said, the OP is right in spirit if not literally. Open source simply means people can usually have their cake and eat it too, in the form of having both the source available and the freedom to choose which parts of software packages they wish to use. Ultimately it's impossible to make anything impossible to remove in the open source world. You can either fork the source tree or choose alternatives that do the same thing without those features (including writing your own). For that matter, in the Unix-like world you don't even have to pick between Wayland and X.org; you can just use TUIs if you're so inclined. The Linux Foundation doesn't own Linux. It's merely a financial institution used by well-resourced corporations to influence some open source project directions; it actually has no real power over anything. The distros themselves choose what to include in their repositories. If you don't like the way a distro does something, pick a different one. There are literally hundreds of them to choose from.

I personally use Linux Mint because it doesn't utilize Snaps like Ubuntu does. That was a bigger pain point when I switched than it is now: Ubuntu moved Firefox into a snap package that took nearly a minute to initialize and display, an unavoidable, annoying, and unacceptable UX. So I switched to Mint (which is itself an Ubuntu derivative, proving my point). Firefox in a snap isn't a big deal any longer, but I don't care to switch back. If Mint did something I didn't care for, I'd go hunting again and pick something else. It's all about user choice in the FOSS world, not zealotry (although there are distros run by certain kinds of zealots, and that's their choice too). It's about choosing to be able to read and alter the source code if the end user is so inclined. It's about choosing to keep private what the user wishes to keep private, or choosing to share information if they wish (e.g., the Debian popularity contest). It's about using the environment you enjoy using, be it GNOME, KDE, Xfce, a TUI, or whatever. It's about using the tools and add-ons you enjoy, or not installing those you don't want, whether it's Vim or Emacs or VSCodium (not a typo - a VSCode fork). It's about experimentation and being able to openly share experiences, yes, even for those experimenting with different facets of AI software (e.g., PyTorch, Llama, etc.)!

Having AI in the Linux kernel is not a thing anyone but certain zealots cares about. Any algorithm that implements rudimentary learning processes (like accommodating adaptive branch prediction) is artificial intelligence in action. It's just not called that for $reasons (mostly because of anti-AI zealotry from previous "winters"). Having not read the Linux kernel line by line, I can't point to files where things like this exist, but they are there and have been in nearly every CPU since the '90s, requiring kernel space to be aware of them (especially since Spectre vulnerabilities have to be partly mitigated in software as well as hardware).

Edit to add: Putting LLM agents into the (or any) kernel answerable to remote overlords is tinfoil hat level BS at its most ludicrous!
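As an aside on the "rudimentary learning" point above: perceptron branch predictors are a real technique from the hardware literature, reportedly used in shipping CPUs. A toy software sketch of the idea, with parameter values (history length, training threshold) chosen arbitrarily for illustration:

```python
class PerceptronPredictor:
    """Toy perceptron branch predictor: predicts taken when the dot
    product of learned weights and recent branch history is non-negative,
    and trains on mispredictions or low-confidence predictions."""

    def __init__(self, history_len=8, threshold=14):
        self.history = [1] * history_len        # +1 = taken, -1 = not taken
        self.weights = [0] * (history_len + 1)  # index 0 is the bias weight
        self.threshold = threshold              # keep training while |output| is small

    def predict(self):
        out = self.weights[0] + sum(
            w * h for w, h in zip(self.weights[1:], self.history))
        return out >= 0, out

    def update(self, taken):
        pred, out = self.predict()
        t = 1 if taken else -1
        # Train on a misprediction, or when confidence is below threshold.
        if pred != taken or abs(out) <= self.threshold:
            self.weights[0] += t
            for i, h in enumerate(self.history):
                self.weights[i + 1] += t * h
        self.history = self.history[1:] + [t]   # shift in the actual outcome

p = PerceptronPredictor()
for _ in range(50):   # an always-taken branch is learned quickly
    p.update(taken=True)
```

Hardware versions do the same arithmetic with small saturating integer weights indexed by the branch address; the point here is only that "learning" in this minimal sense has lived next to, and below, the kernel for a long time.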
 
Upvote
25 (28 / -3)

AdamWill

Ars Scholae Palatinae
960
Subscriptor++
My experience with MCP is pretty limited, and while it's intended for AI agents, it's really about being able to list the capabilities of an endpoint, provide info on how to interact with/use those capabilities, and then provide a JSON-based interface for calling them. Which is actually a pretty human-friendly design. So of course we built that for the machines and left the undocumented mess of APIs for the human devs ("How do I use this?" "Just read the source code." :rolleyes:)...
I understand why MCP exists, but I can't shake the feeling it's very....weird. It's kinda like a Mechanical Turk as a service. At first the promise was AI would just magically do all this stuff, then we found out it can't do that, but people liked the vision, so now we have to write a bunch of deterministic code for the AI frontend to plug into so it can pretend it's doing all the things by magic? I mean...whatever shifts units, I guess...
 
Upvote
13 (16 / -3)

Lil' ol' me

Ars Scholae Palatinae
691
Subscriptor
I really wish people would stop conflating "AI" as ...
That's pedantry. "AI" has been mass marketed as a particular thing, so that's what most people/complainers are talking about here.

Now, with that rant said, OP is true in spirit if not literally. ... The Linux Foundation doesn't own Linux. It's merely a financial institution used by well-resourced corporations to influence some open source project directions, it actually has no real power over anything.
Money is power, as we all know from the Oligarchy State.

I personally use Linux Mint because it doesn't utilize Snaps like Ubuntu does.
Funny you bring up Linux Mint, since LMDE (Linux Mint Debian Edition) exists for ... reasons. Reasons related (but not AI specific) to the concerns people have here, of a corporation (like Ubuntu's owner) pushing software (like Microsoft) onto OSes. LMDE exists because it isn't so trivial to remove some software that a Linux distribution may add.

"Oh, just pick another distribution." Sometimes people pick a distro because it has features that are hard to configure in other distributions. This becomes Whack-a-Mole as people are forced to move from distribution to distribution as AI (in whatever form) creeps into the OS on your owned computers.
Having AI in the Linux kernel is not a thing anyone but certain zealots care about.
"Only crazy people care about control of their OS". OK, sure.
 
Upvote
1 (10 / -9)

RZetopan

Ars Tribunus Angusticlavius
8,190
The industry is being driven by the fear of being left behind.

What history says of that is not my area of expertise, so I'll let others more informed than I go from here.
The answer appeared in written form more than a century ago, and very likely much farther back in time: "Extraordinary Popular Delusions and the Madness of Crowds" by Charles Mackay, published in 1841.
 
Upvote
6 (7 / -1)
That's pedantry. "AI" has been mass marketed as a particular thing, so that's what most people/complainers are talking about here.
From my perspective, AI hasn't been marketed as one particular thing, so much as it's been marketed as every particular thing. AI (according to various marketing departments) is for creating art, upscaling video, gaming, replacing search engines, analyzing MRIs and X-rays, facial recognition, musical composition, friendship, romantic companionship, CPU branch prediction (perceptrons), security, and probably another dozen things I forgot to mention.

Many of these use cases involve generative AI, but by no means all of them. Regardless, I don't see AI as being marketed as one specific thing so much as I see it as a buzzword that's been slapped on everything, everywhere.

"Only crazy people care about control of their OS". OK, sure.
I read OP's statement of "Having AI in the Linux kernel is not a thing anyone but certain zealots care about" as logically following their expansive definition of AI. The author's point (I thought) was not that only zealots care about keeping (generative) AI out of the Linux kernel, but that AI (understood broadly) is already part of the Linux kernel in certain ways that no one had a problem with until generative AI started hoovering up all the money and attention in computing.

On a practical level, nobody is going to integrate AI into the Linux kernel without sign-off from people like Linus Torvalds. He's indicated that he sees some value for AI in certain contexts but that it isn't a replacement for programming expertise. It's not even clear to me what it would mean to try and integrate AI into the kernel, because I'd think keeping AI processes isolated in userspace would be a basic security precaution developers would want to take.
 
Upvote
14 (14 / 0)

vassago

Ars Tribunus Militum
2,822
Subscriptor
It's not even clear to me what it would mean to try and integrate AI into the kernel
Yeah, when I brought up integrating LLMs/AI in the kernel, it was about the kernel being an integration point that would basically affect every distro, not based on any potential use case for LLMs/AI in the kernel or anything.
 
Upvote
0 (1 / -1)
That's pedantry. "AI" has been mass marketed as a particular thing, so that's what most people/complainers are talking about here.


Money is power, as we all know from the Oligarchy State.


Funny you bring up Linux Mint, since LMDE (Linux Mint Debian Edition) exists for ... reasons. Reasons related (but not AI specific) to the concerns people have here, of a corporation (like Ubuntu's owner) pushing software (like Microsoft) onto OSes. LMDE exists because it isn't so trivial to remove some software that a Linux distribution may add.

"Oh, just pick another distribution." Sometimes people pick a distro because it has features that are hard to configure in other distributions. This becomes Whack-a-Mole as people are forced to move from distribution to distribution as AI (in whatever form) creeps into the OS on your owned computers.

"Only crazy people care about control of their OS". OK, sure.
1) It's not pedantry, it's what AI actually is: "the science and engineering of making intelligent machines" - John McCarthy in 1955, the man who coined the term. Argue with the damned dictionary like any other idiot screaming at a brick wall. If a marketer walks off the end of a pier, feel free to follow them. The rest of us will enjoy the spectacle.

2) The Linux Foundation IS funded by many highly resourced entities, but they have no direct say over how Linux and its plethora of distros do things. In fact, the Linux Foundation TAB is a way for Linus Torvalds, Greg Kroah-Hartman, and a few other core kernel developers to remain employed full time without being under the influence of any single vendor. The foundation doesn't work like other foundations such as the FreeBSD Foundation; it has no direct say over how anything works, it just takes up efforts to advocate for standardized practices. Its influence is actually very limited in practice. (And that's putting it mildly, because the Linux kernel developers and individual distros are, like a herd of cats, almost pathologically impossible to lead.)

3) Picking your distro is core to the Linux philosophy of choice. If you don't like it, yes, quite literally go elsewhere. No one is going to care.

4) Twisting my words. That's the mark of a tinfoil-hat zealot. I said someone putting an LLM agent (which is a client) in an OS's kernel is insane, and I stand by that. It's quite literally an impossible position to stand on, because that's not what a kernel is, does, or should ever do. The Linux kernel will never have any such thing. People going off the deep end thinking the Linux Foundation is going to force LLMs into the kernel are off their damned rockers.
 
Upvote
23 (23 / 0)

Don Reba

Ars Praefectus
3,334
Subscriptor++
Ah, so that's what the latest Visual Studio update is about.

IDE

  • MCP Authentication Management
  • MCP Server Instructions
  • MCP Elicitations and sampling
  • MCP Server Management

GitHub Copilot

  • GitHub Cloud Agent preview

Debugging & diagnostics

  • Smarter breakpoint troubleshooting
  • Debugger Copilot uses Output Window
  • .NET counters for profiler agent
  • Exception analysis with GitHub repo context

Desktop

  • WinForms Expert agent
 
Upvote
5 (5 / 0)