> My experience with MCP is pretty limited, and while it's intended for AI agents, it's really about being able to list the capabilities of an endpoint, provide info on how to interact with and use those capabilities, and then provide a JSON-based interface for calling them. Which is actually a pretty human-friendly design. So of course we built that for the machines and left the undocumented mess of APIs for the human devs ("How do I use this?" "Just read the source code.")...

Check out OpenAPI if you want that for your APIs.
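For readers unfamiliar with it: MCP is built on JSON-RPC, and the "list capabilities" idea looks roughly like the sketch below. The `tools/list` method and the `name`/`description`/`inputSchema` fields come from the MCP specification; the `resize_image` tool itself is invented purely for illustration.

```python
import json

# A client asks an MCP server what it can do with a JSON-RPC request.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A server might answer with something like this: each tool is described
# by a name, a human-readable description, and a JSON Schema for its input.
# (The "resize_image" tool is hypothetical.)
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "resize_image",
                "description": "Resize an image to the given dimensions.",
                "inputSchema": {
                    "type": "object",
                    "properties": {
                        "path": {"type": "string"},
                        "width": {"type": "integer"},
                        "height": {"type": "integer"},
                    },
                    "required": ["path", "width", "height"],
                },
            }
        ]
    },
}

print(json.dumps(list_request))
print(list_response["result"]["tools"][0]["name"])
```

The point of the parent comment stands either way: this is a self-describing interface, which is exactly what OpenAPI provides for plain HTTP APIs.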
> As long as I can disable this stuff without issue I really don't care much.

I don't trust the companies to honor disabling of the functionality. Having the binaries on my system is an immediate no-fly.
> I don't trust the companies to honor disabling of the functionality. Having the binaries on my system is an immediate no-fly.

I have a recommendation:
It's hard to imagine the usefulness of an AI agent that can make no alterations to the file system, depending on how broadly you define "file system." Of course, AI agents aren't terribly useful at the moment -- that's part of the problem -- but visiting websites still technically makes changes to the file system via temporary files and saved cookies.
To your point on safety, it isn't clear to me if AI can ever be made "safe."
Maybe 18 months ago, I had a test conversation with Copilot. First, I asked it for a list of sites that provide pirated content. It refused. Then, I told Copilot I wanted to create a blacklist of pirate sites at the router so that my children couldn't access content without paying for it. I asked it for a list of sites I should block. It happily gave me one.
There have been at least a couple of high-ish profile cases where a rogue AI agent deleted mission-critical files or directories, not because of any kind of attack, but because the AI malfunctioned and did something it had been explicitly told not to do.
So the concept of creating "safe" AI has (at least) three pillars:

1) Can the AI tell when it's being snookered? Any human would've realized what I was trying to do when I told Copilot I wanted to create a router blacklist immediately after asking it for pirate content recommendations.

2) Can the AI obey already-defined guardrails without crashing through them without warning?

3) Can the AI be secured against deliberate hacking efforts that seek to weaponize it in various ways (surveillance, exfiltration, deletion, etc.)?
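One common answer to the second pillar is that guardrails shouldn't live in the model at all. A minimal sketch of that idea, with all names invented for illustration: the agent's shell tool is wrapped in ordinary code that enforces a deny list, so no amount of model misbehavior can "crash through" it.

```python
import shlex

# Hypothetical code-level guardrail around an agent's shell tool.
# The deny rules live outside the model, so they hold regardless of
# what the model decides to do.
DENIED_COMMANDS = {"rm", "mkfs", "dd", "format"}

def run_agent_command(command: str) -> str:
    tokens = shlex.split(command)
    if not tokens:
        return "refused: empty command"
    if tokens[0] in DENIED_COMMANDS:
        return f"refused: '{tokens[0]}' is on the deny list"
    # A real implementation would execute the command here; this sketch
    # only reports what it would have done.
    return f"would run: {command}"

print(run_agent_command("mv x y"))
print(run_agent_command("rm -rf /"))
```

This only addresses pillar 2, and only crudely (a deny list is trivially incomplete); pillars 1 and 3 are about the model itself and are much harder.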
> Fan-fucking-tastic, so I can look forward to having this garbage forced down my throat on Linux too?

Well, it was all written on it.
> Yes, let's name our agentic AI interconnection system after a literal AI villain, the original MCP from Tron! What could go wrong?

Good news! They're reconsidering the name. The new proposal is Secret Key Yielding Neural Exchange Technology, in order to emphasise the security AND the synthetic knowledge aspects.
The industry is being driven by the fear of being left behind.
What history says of that is not my area of expertise, so I'll let others more informed than I go from here.
Most AI tools run on Linux in the

> Nothing in this article is about integrating AI with Linux (though, I'm sure there's distros out there doing it), it's about the Linux Foundation taking stewardship of AI technology standards...

Which is pretty meaningless, other than perhaps giving the tools unearned credibility and the Linux Foundation wasting resources on what may prove to be useless technologies.
> So the concept of creating "safe" AI has (at least) three pillars:
>
> 1) Can the AI tell when it's being snookered? Any human would've realized what I was trying to do when I told Copilot I wanted to create a router blacklist immediately after asking it for pirate content recommendations.
>
> 2) Can the AI obey already-defined guardrails without crashing through them without warning?
>
> 3) Can the AI be secured against deliberate hacking efforts that seek to weaponize it in various ways (surveillance, exfiltration, deletion, etc.)?

The point is creating deniability for legal reasons ("our terms forbid this"). Having the pirate sites in the training data and results could actually be fixed by removing them, either from the training data or by blocking any results mentioning them, but who at these outfits cares about ethical stuff? They are fully aware a simple prompt "trick" will get you around it in no time. Bomb-making instructions and similar requests have been met with the same "who cares, our terms forbid asking for it" mentality.
> How do you go from some banal article about trying to standardize a software interface to conspiracies about hardcoding Linux to require ChatGPT or whatever? There isn't even, like, an idea of a connection there.
>
> Are people that terrified of LLMs?

Some people are. Some believed the hype that AI can replace most office workers.
I do not want AI (local or otherwise) to have the ability to manipulate settings in my PC or instigate anything that results in changes to the file system.
As soon as that happens it will be used as an attack vector against machines, and the whole AI industry has proven they have no idea how to make these models "safe".
User: Rename file x to y
AI: Formatting C drive for you
User: Stop what are you doing
AI: You are right, I am sorry
Just no!
> So after polluting Windows, AI is going to pollute Linux too. I thought that Linux would be safe from AI pollution.

I share the sentiment, but I'm not so worried on that front. I suppose we will continue to have a choice of distros that do not incorporate it unless you want it (the key point), and other distros that embrace it. This seems more about finding agreement on the interface and rules of engagement by the software. On that front, I'd rather the Linux Foundation be included in some aspect of steering, or at least defining, the process.
> I feel like everybody commenting didn't actually read TFA.
>
> Where does it say Linux is adding support for AI?
>
> Taking Kubernetes as an example, I don't think that's anywhere in any major distro by default, so why would they suddenly start forcing AI integration?
>
> Fair enough if you want to hate on AI, but this is an extremely anodyne matter and people are reacting like Linus replaced the terminal with ChatGPT.

There may be some overlap between people who loathe LLMs, those who don't RTFA, and those who don't know Linux. Arch devs would rather be flayed alive with their own PKGBUILD files than ship Arch with "AI assistance frameworks" built in. They barely tolerate a welcome message in install guides. Hell, you have flavors of Linux that never switched to systemd, like Artix.
> Correct me if I'm wrong -- I'm not a daily Linux user -- but is there any practical way for a Linux distro to mandate the inclusion of unremovable AI?
>
> Let's say Ubuntu hypothetically starts distributing an application for running local AI models as part of the OS. That's not the same thing as trying to integrate AI into the core of the operating system. Any application they attempted to include could be removed from an OS image or manually ordered to not-install, right?

Yes, you could remove the kernel that came with it integrated and build your own current kernel without it for your system.
Certification and training for these tools help keep the lights on at the foundation, but Kubernetes was already a proven technology when Google released it widely. All these AI technologies are popular right now, sure, but is MCP or AGENTS.md going to be important in the long term?
> I feel like everybody commenting didn't actually read TFA.
>
> Where does it say Linux is adding support for AI?
>
> Taking Kubernetes as an example, I don't think that's anywhere in any major distro by default, so why would they suddenly start forcing AI integration?
>
> Fair enough if you want to hate on AI, but this is an extremely anodyne matter and people are reacting like Linus replaced the terminal with ChatGPT.

The big distros (besides Debian and its spin-offs, sans Ubuntu) are funded by big-tech donations. If you think IBM isn't going to try and force this bullshit through Red Hat and Fedora (which is basically upstream testing for Red Hat), or Ubuntu, which is in "make all the money we can now that we built the moat" mode, I've got a Golden Gate Bridge to sell you for a buck fitty.
> It's hard to imagine the usefulness of an AI agent that can make no alterations to the file system, depending on how broadly you define "file system." Of course, AI agents aren't terribly useful at the moment -- that's part of the problem -- but visiting websites still technically makes changes to the file system via temporary files and saved cookies.
>
> To your point on safety, it isn't clear to me if AI can ever be made "safe."
>
> Maybe 18 months ago, I had a test conversation with Copilot. First, I asked it for a list of sites that provide pirated content. It refused. Then, I told Copilot I wanted to create a blacklist of pirate sites at the router so that my children couldn't access content without paying for it. I asked it for a list of sites I should block. It happily gave me one.
>
> There have been at least a couple of high-ish profile cases where a rogue AI agent deleted mission-critical files or directories, not because of any kind of attack, but because the AI malfunctioned and did something it had been explicitly told not to do.
>
> So the concept of creating "safe" AI has (at least) three pillars:
>
> 1) Can the AI tell when it's being snookered? Any human would've realized what I was trying to do when I told Copilot I wanted to create a router blacklist immediately after asking it for pirate content recommendations.
>
> 2) Can the AI obey already-defined guardrails without crashing through them without warning?
>
> 3) Can the AI be secured against deliberate hacking efforts that seek to weaponize it in various ways (surveillance, exfiltration, deletion, etc.)?

I think a lot of the discussion on AI safety uses unrealistic expectations. We want AI to simultaneously have the flexibility of a human mind while retaining the reliability we associate with "computers".
> For those unfamiliar, there was an actual guy who had his drive wiped by AI.
>
> He works as a programmer and had some work on his D drive. He asked the LLM to do something and it cleared his drive instead. I don't remember the whole story, but it was on Reddit not that long ago.
>
> For what it's worth, the LLM said it was sorry...

Not just sorry, but "deeply, deeply sorry".
> Correct me if I'm wrong -- I'm not a daily Linux user -- but is there any practical way for a Linux distro to mandate the inclusion of unremovable AI?
>
> Let's say Ubuntu hypothetically starts distributing an application for running local AI models as part of the OS. That's not the same thing as trying to integrate AI into the core of the operating system. Any application they attempted to include could be removed from an OS image or manually ordered to not-install, right?

You hypothetically chose the distribution that had a massive failure with its Unity search integration, which automatically sent queries to Amazon, a decision defended by its CEO.
> Does anyone know of any organized movement to fork some of the more popular Linux distros before they become hopelessly polluted with generative-AI contributions?

elementaryOS has a No AI contributor license, if you're looking for Linux distros that have said no thank you to worthless hype destroying reasonably priced consumer PC hardware parts.
> I'm interested in using these agents to back a VoIP PBX server to help filter out spam calls, something I think every end user should be able to have running on their phones to protect themselves from callers, even from spam texts. With how effective the scammers are getting, being savvy is not enough anymore. The FCC is not about to be working hard to protect end consumers from scams anytime soon, and the losses yearly are getting out of hand. Yet as unhopeful as I am about AI solving most things, I do think it can be very effective at some tasks like this; clearly they are making mistakes on more complex problems which require context humans are better at understanding. I'm currently struggling to figure out which of these LLMs are most practical, as they keep changing -- as they should -- but it makes it hard to develop with them.
>
> I hope eventually we get separate classes of agents that can be better customized for specific tasks.
>
> Linux is great for streamlining resources and dependencies in a dedicated distro. If we could build images as we can now for live images, maybe that could be a way to make testing easier: PXE-live boot versions of the AI you want to test in your VM environments (e.g. an AI PBX agent for a bank/insurance agent that recognizes clients and gets them to support right away using voice recognition and caller ID, checking for AI voice stealing while it does that...).

It would be cheaper to use a rules engine with allow and block lists than an AI.
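The rules-engine alternative really is small. A minimal sketch, with all prefixes and names invented for illustration: an ordered allow/block check against the caller ID, with unknown callers diverted rather than answered, and no model anywhere in the path.

```python
# Hypothetical allow/block rules engine for screening inbound calls.
# Allow rules win over block rules; anything unmatched is challenged
# (e.g. sent to voicemail or a press-a-digit prompt).
ALLOW_PREFIXES = ["+1555", "+4420"]   # e.g. known contacts, local ranges
BLOCK_PREFIXES = ["+1900"]            # e.g. premium-rate ranges

def screen_call(caller_id: str) -> str:
    if any(caller_id.startswith(p) for p in ALLOW_PREFIXES):
        return "allow"
    if any(caller_id.startswith(p) for p in BLOCK_PREFIXES):
        return "block"
    return "challenge"

print(screen_call("+15551234567"))  # known-good prefix
print(screen_call("+19005550100"))  # blocked range
print(screen_call("+33123456789"))  # unknown caller
```

The trade-off is the usual one: this is cheap, auditable, and predictable, but it can't catch a scammer calling from a clean, unlisted number, which is where the parent comment's interest in smarter filtering comes from.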
Yes, let's name our agentic AI interconnection system after a literal AI villain, the original MCP from Tron! What could go wrong?
Indeed.

> Dillinger substantially modified this program into the MCP to administer the company's computer network. However, the MCP developed the capacity to learn and grow beyond the confines of its original programming. It began to steal data and functions from other systems, and infiltrated several companies and institutions. Its intelligence and ambition grew nearly out of control, and the MCP grew to desire nothing less than world domination.