Big Tech joins forces with Linux Foundation to standardize AI agents

shodanbo

Wise, Aged Ars Veteran
107
My experience with MCP is pretty limited, and while it's intended for AI agents, it's really about listing the capabilities of an endpoint, providing info on how to interact with and use those capabilities, and then providing a JSON-based interface for calling them. Which is actually a pretty human-friendly design. So of course we built that for the machines and leave the undocumented mess of APIs for the human devs ("How do I use this?" "Just read the source code." :rolleyes:)...
Check out OpenAPI if you want that for your APIs.

Of course you actually have to do it. With gRPC in the mix there are OpenAPI auto-generation possibilities.
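For readers who haven't seen it, MCP's discovery step is plain JSON-RPC: the client asks for tools, gets back names plus JSON Schemas, and can then call them. A minimal sketch of that exchange — the `rename_file` tool and its schema are made up for illustration, not from any real server:

```python
import json

# A hypothetical MCP-style capability exchange (JSON-RPC 2.0).
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "rename_file",  # hypothetical example tool
                "description": "Rename a file within the workspace.",
                "inputSchema": {  # JSON Schema describing the arguments
                    "type": "object",
                    "properties": {
                        "src": {"type": "string"},
                        "dst": {"type": "string"},
                    },
                    "required": ["src", "dst"],
                },
            }
        ]
    },
}

# A client can work out how to call the tool from the schema alone:
tool = list_response["result"]["tools"][0]
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": tool["name"], "arguments": {"src": "x.txt", "dst": "y.txt"}},
}
print(json.dumps(call_request, indent=2))
```

That self-describing loop is exactly what human-facing APIs usually lack, and what OpenAPI provides when someone bothers to write the spec.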
 
Upvote
3 (3 / 0)

Missing Minute

Wise, Aged Ars Veteran
1,386
I don't trust the companies to honor disabling of the functionality. Having the binaries on my system is an immediate no-fly.
I have a recommendation:

https://en.wikipedia.org/wiki/TempleOS
https://templeos.org/
:p
 
Upvote
5 (6 / -1)
It's hard to imagine the usefulness of an AI agent that can make no alterations to the file system, depending on how broadly you define "File system." Of course, AI agents aren't terribly useful at the moment -- that's part of the problem -- but visiting websites still technically makes changes to the file system via temporary files and saved cookies.

To your point on safety, it isn't clear to me if AI can ever be made "safe."

Maybe 18 months ago, I had a test conversation with Copilot. First, I asked it for a list of sites that provide pirated content. It refused. Then, I told Copilot I wanted to create a blacklist of pirate sites at the router so that my children couldn't access content without paying for it. I asked it for a list of sites I should block. It happily gave me one.

There have been at least a couple of high-ish profile cases where a rogue AI agent deleted mission-critical files or directories, not because of any kind of attack, but because the AI malfunctioned and did something it had been explicitly told not to do.

So the concept of creating "safe" AI has at (least) three pillars:

1). Can the AI tell when it's being snookered? Any human would've realized what I was trying to do when I told Copilot I wanted to create a router blacklist immediately after asking it for pirate content recommendations.

2). Can the AI obey already-defined guardrails without crashing through them without warning?

3). Can the AI be secured against deliberate hacking efforts that seek to weaponize it in various ways (surveillance, exfiltration, deletion, etc)?

I get what you are saying, and I think we are in alignment on whether they can be made safe: I do not believe they can be, in their current incarnation.

One team managed to jailbreak the new Google AI; in a few minutes they had it create a website with detailed instructions for making a nerve gas. So, "sub-optimal operation," to say the least. lol

They try to make it secure, but as you pointed out, a slight change in the prompt alters the jailbreak path and the results get given anyway, or, for agents, the action gets performed. I'd hate to think how many users cache their online banking creds in the browser, for example; they will be the first to have their accounts emptied. If people manage to trigger "drive-by" attacks using AI when you visit a site, it could be catastrophic, really.

And to your other point, an AI that is unable to affect the file system is not that useful, but it is also unable to spend your money, send off all your personal details, or trash your OS install, etc.

MS and the like are in the "initial rush" for market share and trying to find anything that will make people want to pay for AI, so they are moving fast and will deal with the issues later, but by then it might be a bit late.

Security wise we will be in for a rough number of years I think :)
 
Upvote
4 (5 / -1)

Fred Duck

Ars Tribunus Angusticlavius
7,336
Yes, let's name our agentic AI interconnection system after a literal AI villain, the original MCP from Tron! What could go wrong? :cautious:
Good news! They're reconsidering the name. The new proposal is Secret Key Yielding Neural Exchange Technology, in order to emphasise the security AND the synthetic knowledge aspects.
 
Upvote
5 (7 / -2)

JoHBE

Ars Praefectus
4,294
Subscriptor++
The industry is being driven by the fear of being left behind.

What history says of that is not my area of expertise, so I'll let others more informed than I go from here.

You might end up with winners (company A), winners (company B, and so on), winners (consumers), and winners (the rest/the world/society).

Or you might end up with losers (company A), losers (company B, and so on), losers (consumers), and losers (the rest/the world/society).

And all possible permutations in between.
 
Upvote
0 (2 / -2)
Nothing in this article is about integrating AI with Linux (though I'm sure there are distros out there doing it); it's about the Linux Foundation taking stewardship of AI technology standards...

Which is pretty meaningless other than perhaps giving the tools unearned credibility and the Linux Foundation wasting resources on what may prove to be useless technologies.
Most AI tools run on Linux in the cloud server farm, so there's a point to the Linux Foundation being involved. Us Linux enthusiasts (mostly) couldn't care less. Please just keep the mainstream distros clean, and create your own AI distros for your toys.
 
Upvote
-4 (0 / -4)
So the concept of creating "safe" AI has at (least) three pillars:

1). Can the AI tell when it's being snookered? Any human would've realized what I was trying to do when I told Copilot I wanted to create a router blacklist immediately after asking it for pirate content recommendations.

2). Can the AI obey already-defined guardrails without crashing through them without warning?

3). Can the AI be secured against deliberate hacking efforts that seek to weaponize it in various ways (surveillance, exfiltration, deletion, etc)?
The point is creating deniability for legal reasons ("our terms forbid this"). Having the pirate sites show up in results could be fixed by removing them, either from the training data or by blocking any results that mention them (but who at these outfits cares about ethical 'stuff'), while being fully aware that a simple prompt "trick" will get you around it in no time. Making bombs and other such topics have been found to get the same 'who cares, our terms forbid asking for it' treatment.
Move fast, break things culture & humanity for a pile of cash.
 
Upvote
-5 (1 / -6)
Man, really in awe of the first page of comments here. Article is a pretty whatever overview of some tech companies working to unify some software standards and then a bunch of comments in a row all acting like this is the end of the world.

How do you go from some banal article about trying to standardize a software interface to conspiracies about hardcoding Linux to require ChatGPT or whatever? There isn't even, like, an idea of a connection there.

Are people that terrified of LLMs?
 
Upvote
4 (5 / -1)

alxx

Ars Praefectus
5,001
Subscriptor++
How do you go from some banal article about trying to standardize a software interface to conspiracies about hardcoding Linux to require ChatGPT or whatever? There isn't even, like, an idea of a connection there.

Are people that terrified of LLMs?
Some people are. Some believed the hype that AI can replace most office workers.
Others are just sick of big tech and its attempts to take over everything.

Some AI is useful. As a newbie to UniFi, GPT has been useful: https://help.ui.com/hc/en-us/categories/6583256751383-UniFi

It's funny watching software devs initially wanting to use AI, believing it'll make them super-efficient coders and more productive, then gradually getting more and more cynical of AI once they've committed some AI-generated bugs and dealt with AI security tools that work poorly (looking at you, Snyk and Falcon, though Snyk is better than CrowdStrike Falcon and its AI/ML by a long way).

We've had Snyk detect the same package in repos after the package version was upgraded weeks ago, and detect vulnerabilities in library modules in the language runtimes that you can't modify, and then had cybersec teams report the team as having thousands of duplicate vulnerabilities (the Falcon sensor doesn't work for container hosts).

That's when you end up with up to 40% of cluster CPU being consumed by security tooling.
 
Upvote
0 (1 / -1)

J.D.M

Wise, Aged Ars Veteran
176
I do not want AI (local or otherwise) to have the ability to manipulate settings on my PC or instigate anything that results in changes to the file system.

As soon as that happens, it will be used as an attack vector, and the whole AI industry has proven they have no idea how to make these models "safe".

User: Rename file x to y
AI: Formatting C drive for you
User: Stop what are you doing
AI: You are right, I am sorry

Just no!

For those unfamiliar, there was an actual guy who had his drive wiped by AI.
He works as a programmer and had some work on his D drive. He asked the LLM to do something and it cleared the drive instead. I don't remember the whole story, but it was on Reddit not that long ago.

For what it's worth, the LLM said it was sorry...
 
Upvote
0 (2 / -2)

Earthmapper

Ars Centurion
203
Subscriptor
So after polluting Windows, AI is going to pollute Linux too. I thought that Linux would be safe from AI pollution.
I share the sentiment, but I'm not so worried on that front. I suppose we will continue to have a choice: distros that do not incorporate it unless you want it (the key point), and other distros that embrace it. This seems more about finding agreement on the interface and the rules of engagement for the software. On that front, I'd rather the Linux Foundation be included in some aspect of steering, or at least defining, the process.

The reason I cut Windows out of my personal computing is because even if I turn Copilot off, I feel it is still being used by the OS for the benefit of the company in ways that could be harmful in unexpected ways going forward. With FOSS operating systems, I at least feel like it will have many eyes on it and I'll better understand what it is and isn't doing.
 
Upvote
0 (1 / -1)

Castellum Excors

Ars Scholae Palatinae
748
Subscriptor++
I feel like everybody commenting didn't actually read TFA.

Where does it say Linux is adding support for AI?

Taking kubernetes as an example, I don't think that's anywhere in any major distro by default, so why would they suddenly start forcing AI integration?

Fair enough if you want to hate on AI, but this is an extremely anodyne matter, and people are reacting like Linus replaced the terminal with ChatGPT.
There may be some overlap between people who loathe LLMs, those who don't RTFAs, and those who don't know Linux. Arch devs would rather be flayed alive with their own PKGBUILD files than ship Arch with "AI assistance frameworks" built in. They barely tolerate a welcome message in install guides. Hell, you have flavors of Linux that never switched to systemd, like Artix.

All the Linux Foundation is doing is helping ensure there are standards so if you want that in your install, you can make it happen.
 
Upvote
3 (3 / 0)

UnTokenizedTuna

Smack-Fu Master, in training
66
Subscriptor
Correct me if I'm wrong -- I'm not a daily Linux user -- but is there any practical way for a Linux distro to mandate the inclusion of unremovable AI?

Let's say Ubuntu hypothetically starts distributing an application for running local AI models as part of the OS. That's not the same thing as trying to integrate AI into the core of the operating system. Any application they attempted to include could be removed from an OS image or manually ordered to not-install, right?
Yes. You could remove the kernel that came with it integrated and build your own current kernel without it for your system.
 
Upvote
4 (4 / 0)

UnTokenizedTuna

Smack-Fu Master, in training
66
Subscriptor
I'm interested in using these agents to back a VoIP PBX server to help filter out spam calls, something I think every end user should be able to have running on their phone to protect themselves from callers, even from spam texts. With how effective the scammers are getting, being savvy is not enough anymore. The FCC is not about to be working hard to protect end consumers from scams anytime soon, and the yearly losses are getting out of hand. As unhopeful as I am about AI solving most things, I do think it can be very effective at doing some tasks like this, and clearly these models make mistakes on more complex problems that require context humans are better at understanding. I'm currently struggling to figure out which of these LLMs is most practical, as they keep changing — which they should, but that makes it hard to develop with them.

I hope eventually we get separate classes of agents that can be better customized for specific tasks.
Linux is great for streamlining resources and dependencies in a dedicated distro. If they could build images like the live images we can get now, maybe that could be a way to make testing easier: PXE live-boot versions of the AI you want to test in your VM environments (e.g., an AI PBX agent for a bank/insurance agency that recognizes clients and gets them to support right away using voice recognition and caller ID, checking for AI voice stealing while it does that...).

Certification and training for these tools help keep the lights on at the foundation, but Kubernetes was already a proven technology when Google released it widely. All these AI technologies are popular right now, sure, but is MCP or AGENTS.md going to be important in the long term?
 
Upvote
0 (0 / 0)

beheadedstraw

Ars Scholae Palatinae
653
I feel like everybody commenting didn't actually read TFA.

Where does it say Linux is adding support for AI?

Taking kubernetes as an example, I don't think that's anywhere in any major distro by default, so why would they suddenly start forcing AI integration?

Fair enough if you want to hate on AI, but this is an extremely anodyne matter, and people are reacting like Linus replaced the terminal with ChatGPT.
The big distros (besides Debian and its spin-offs, sans Ubuntu) are funded by big tech donations. If you think IBM isn't going to try to force this bullshit through Red Hat and Fedora (which is basically upstream testing for Red Hat), or through Ubuntu, which is in "make all the money we can now that we've built the moat" mode, I've got a Golden Gate Bridge to sell you for a buck fitty.
 
Upvote
-1 (1 / -2)

Ikcelaks

Seniorius Lurkius
29
Subscriptor++
It's hard to imagine the usefulness of an AI agent that can make no alterations to the file system, depending on how broadly you define "File system." Of course, AI agents aren't terribly useful at the moment -- that's part of the problem -- but visiting websites still technically makes changes to the file system via temporary files and saved cookies.

To your point on safety, it isn't clear to me if AI can ever be made "safe."

Maybe 18 months ago, I had a test conversation with Copilot. First, I asked it for a list of sites that provide pirated content. It refused. Then, I told Copilot I wanted to create a blacklist of pirate sites at the router so that my children couldn't access content without paying for it. I asked it for a list of sites I should block. It happily gave me one.

There have been at least a couple of high-ish profile cases where a rogue AI agent deleted mission-critical files or directories, not because of any kind of attack, but because the AI malfunctioned and did something it had been explicitly told not to do.

So the concept of creating "safe" AI has at (least) three pillars:

1). Can the AI tell when it's being snookered? Any human would've realized what I was trying to do when I told Copilot I wanted to create a router blacklist immediately after asking it for pirate content recommendations.

2). Can the AI obey already-defined guardrails without crashing through them without warning?

3). Can the AI be secured against deliberate hacking efforts that seek to weaponize it in various ways (surveillance, exfiltration, deletion, etc)?
I think a lot of the discussion on AI safety rests on unrealistic expectations. We want AI to simultaneously have the flexibility of a human mind while retaining the reliability we associate with "computers".

Safety in AI must be enforced by limits external to the agent itself. Figure out the scope of work you expect the AI agent to perform, and limit its permissions to just those absolutely needed to complete those tasks. This is the same way we put limits on humans.
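As a sketch of that external-enforcement idea: a broker holds a fixed allowlist and refuses any tool call outside it, no matter what the model proposes. The tool names and handlers here are hypothetical, just to show the shape:

```python
import os

# Hypothetical tool handlers; the agent never calls these directly.
def read_file(path):
    with open(path) as f:
        return f.read()

def list_directory(path):
    return os.listdir(path)

HANDLERS = {"read_file": read_file, "list_directory": list_directory}

# Scope of work decided up front, outside the agent's control.
ALLOWED_TOOLS = {"read_file", "list_directory"}

def execute(tool_name, *args):
    """Run a proposed tool call only if it is on the allowlist."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool_name!r} is outside this agent's permissions")
    return HANDLERS[tool_name](*args)
```

Even if a prompt injection convinces the model to emit a `delete_file` call, the broker has no code path that will run it — which is the point of putting the limit outside the model.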
 
Upvote
0 (0 / 0)

vvax56nM

Wise, Aged Ars Veteran
165
For those unfamiliar there was an actual guy who had his drive wiped by AI.
He works a programmer and had some work on D drive. Asked the LLM to do something and it cleared his drive instead. I don't remember the whole story but it was on Reddit not that long ago.

For what's it worth the LLM said it was sorry...
Not just sorry, but "deeply, deeply sorry".


View: https://old.reddit.com/r/Futurology/comments/1pfzeb0/googles_agentic_ai_wipes_users_entire_hdd_without/nsng5fl/
 
Upvote
0 (0 / 0)

gosand

Ars Tribunus Militum
1,704
Correct me if I'm wrong -- I'm not a daily Linux user -- but is there any practical way for a Linux distro to mandate the inclusion of unremovable AI?

Let's say Ubuntu hypothetically starts distributing an application for running local AI models as part of the OS. That's not the same thing as trying to integrate AI into the core of the operating system. Any application they attempted to include could be removed from an OS image or manually ordered to not-install, right?
You hypothetically chose the distribution that had a massive failure with its Unity search integration, which automatically sent queries to Amazon. This was defended by their CEO.
I think the only other one that would attempt something like that would be Red Hat/IBM.

With the wider adoption of Linux, some people will just run what they get and not question it. But I am fairly certain the wider community will keep things like this in check. Someone could aways fork a distro and integrate AI into it if there is a desire for it.
 
Upvote
0 (0 / 0)
Does anyone know of any organized movement to fork some of the more popular Linux distros before they become hopelessly polluted with generative-AI contributions?
elementaryOS has a no-AI contributor policy, if you're looking for Linux distros that have said no thank you to the worthless hype that's destroying reasonably priced consumer PC hardware.
 
Upvote
2 (2 / 0)

alxx

Ars Praefectus
5,001
Subscriptor++
I'm interested in using these agents to back a VoIP PBX server to help filter out spam calls, something I think every end user should be able to have running on their phone to protect themselves from callers, even from spam texts. With how effective the scammers are getting, being savvy is not enough anymore. The FCC is not about to be working hard to protect end consumers from scams anytime soon, and the yearly losses are getting out of hand. As unhopeful as I am about AI solving most things, I do think it can be very effective at doing some tasks like this, and clearly these models make mistakes on more complex problems that require context humans are better at understanding. I'm currently struggling to figure out which of these LLMs is most practical, as they keep changing — which they should, but that makes it hard to develop with them.

I hope eventually we get separate classes of agents that can be better customized for specific tasks.
Linux is great for streamlining resources and dependencies in a dedicated distro. If they could build images like the live images we can get now, maybe that could be a way to make testing easier: PXE live-boot versions of the AI you want to test in your VM environments (e.g., an AI PBX agent for a bank/insurance agency that recognizes clients and gets them to support right away using voice recognition and caller ID, checking for AI voice stealing while it does that...).
It would be cheaper to use a rules engine with allow and block lists than an AI.
You could use the AI to generate the lists, but then again, using AI for this would be more expensive
than getting them from a third-party subscription service.

For most uses, AI is usually more expensive.
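For what it's worth, the rules-engine approach really is tiny. A sketch — the numbers and the spam prefix are made up, standing in for a contact list and a third-party feed:

```python
def route_call(caller_id: str,
               allow: set[str],
               block_prefixes: tuple[str, ...]) -> str:
    """Rules-engine call screening: allowlist wins, then blocklist, then a default."""
    if caller_id in allow:
        return "ring"        # known contact: ring through
    if caller_id.startswith(block_prefixes):
        return "reject"      # matches a spam prefix from the feed
    return "challenge"       # unknown caller: send to a screening prompt

# Example lists; a real deployment would load these from subscriptions.
ALLOW = {"+15551230001"}
BLOCK_PREFIXES = ("+1555999",)

print(route_call("+15559990000", ALLOW, BLOCK_PREFIXES))  # reject
```

No model in the hot path, so it's cheap and deterministic; an LLM, if used at all, would only help curate the lists offline.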

To protect against voice stealing, you can use audio filters.
Customers hate AI calls, even if the voice sounds fairly realistic.

Would be an interesting project to do with or without ai.

You can get containerized AI images that run under Docker or Podman, but you need a fairly decent machine to run them, especially for close-to-real-time responses.

AI Lab in Podman is quite nice: https://podman-desktop.io/docs/ai-lab
I haven't looked at whether it can do any audio processing.
Podman is Red Hat's open-source equivalent to Docker Desktop; it runs unprivileged containers by default, unlike Docker.
 
Upvote
0 (0 / 0)

adespoton

Ars Legatus Legionis
10,747
Yes, let's name our agentic AI interconnection system after a literal AI villain, the original MCP from Tron! What could go wrong? :cautious:
Dillinger substantially modified this program into the MCP to administer the company's computer network. However, the MCP developed the capacity to learn and grow beyond the confines of its original programming. It began to steal data and functions from other systems, and infiltrated several companies and institutions. Its intelligence and ambition grew nearly out of control, and the MCP grew to desire nothing less than world domination.
Indeed.
 
Upvote
0 (0 / 0)