Perplexity says AI access to your files is in "secure environment with clear safeguards."
> Perplexity says AI access to your files is in "secure environment with clear safeguards."

The same kinds of guardrails that prevent LLMs from encouraging people to commit suicide and prevent them from giving advice on mass shooting events and things like that?
If the prospect of letting "a persistent, digital proxy of you" loose on your private local files is worrying to you, you're not alone. Perplexity promises that Personal Computer operates in a "secure environment with clear safeguards," that all "sensitive actions" require user approval, that it keeps a "full audit trail" of every session, and that the system has a "kill switch" to stop things from getting out of hand in the most extreme cases.
> Safeguards being a line in the system prompt to say, "Pretty please with sugar on top, don't delete the user's files"

"Okay. Deleting user files."
But Personal Computer, running on a Mac Mini, also gives Perplexity’s agents local access to your files and apps, which it can open and manipulate directly in order to attempt to complete those tasks.
> Agentic AI tools are not ready for widespread use in production environments. It might be a neat toy in a sandbox.

One prominent characteristic of tech these days seems to be that it's always brought to market years before it's ready, and in the case of LLMs, there are consequences.
Using these tools in their current state is going to be the ruin of anyone who does; the negative consequences are very foreseeable and should be widely known. When a chatbot makes a mistake summarizing my email, the stakes are usually low. Allowing agents to run all over my system raises the stakes much higher, and I am not willing to bet my career or my personal files on that.
I don't trust LLMs to retrieve information from my emails properly, consistently, and without error. Why on earth would we let agents with the same or worse level of functioning run free in our systems? There have already been major failures in other agent deployments, with agents deleting databases after a memory overflow wiped out the guardrails requiring them to ask a human before proceeding.
> an introductory video shows Personal Computer's questions in a sidebar asking things like, "Create an interactive educational guide" and "create a podcast about whales."

Not any specific thing you find interesting about whales. Not any key points that the educational guide is supposed to convey, or to what sort of audience. Just, like, barf up some slop. My boss asked me for a podcast. Make me a podcast. Human almost completely out of the loop.
Then to top it off:

> Perplexity users [and possibly anyone else?] can also log in remotely to their local copy of Personal Computer, making it "controllable from any device, anywhere"

Anyone else have loud noises and flashing lights appear in their head as they read this?
> I have been able to build and create things I thought were out of reach. I have caught up on my to-do list for existing projects. Software development is fun again. I read these comments whenever I feel like I am falling behind, seeing the absolute cope and low agency of the commenters.

You aren't developing software if you're vibe coding. You're creating a mess that appears to do what you want in the best of cases.
> One prominent characteristic of tech these days seems to be that it's always brought to market years before it's ready, and in the case of LLMs, there are consequences.

I would put it a little differently: it feels like in another era, there could be a baseline assumption that a product was broadly fit for purpose. Caveat emptor, of course. It might be buggy, it might be defective. But you could approach the product description knowing some kind of effort was made. Now nobody cares; it really doesn't matter if it almost always works, or just mostly works, or if it will absolutely tank critical systems. In all three cases, just slap an identical "btw you can't trust this tool" statement in the EULA and YOLO that shit out the door.
> Safeguards being a line in the system prompt to say, "Pretty please with sugar on top, don't delete the user's files"

I read an article where a programmer who uses AI a lot included in his instructions, "Faulty code would be embarrassing."
> I have been able to build and create things I thought were out of reach. I have caught up on my to-do list for existing projects. Software development is fun again. I read these comments whenever I feel like I am falling behind, seeing the absolute cope and low agency of the commenters.

Linux sysadmin now doing big-data type stuff at one of the largest financial institutions in the world here. In the next few weeks I need to train some other members of the team to edit configuration files for a monitoring tool that checks the status of hundreds of application websites. Editing these files requires root-level access. None of these team members have Linux experience, but they are very competent at what they currently do. I am more than a little anxious about them having root-level access to these two servers, because the servers are critical for far more than this one monitoring tool. I trust my colleagues' judgement, but given that root gives complete access and control over everything on a Linux box, I still worry, even though I'm certain they will never do anything other than edit the configuration files.
> Safeguards being a line in the system prompt to say, "Pretty please with sugar on top, don't delete the user's files"

You're absolutely right! Let me give you a breakdown of what will be deleted instead:
...and someone in a "faraway" land can't do that to you because you do it more slowly? You (maybe) use frameworks and high-level programming languages, and that's good. But where did that leave the guy who shunned those advances and remained at the same speed and skill level as when he graduated?
AI is a tool, and it is possible to learn how to use it properly.
> Linux sysadmin now doing big-data type stuff at one of the largest financial institutions in the world here. In the next few weeks I need to train some other members of the team to edit configuration files for a monitoring tool that checks the status of hundreds of application websites. Editing these files requires root-level access. None of these team members have Linux experience, but they are very competent at what they currently do. I am more than a little anxious about them having root-level access to these two servers, because the servers are critical for far more than this one monitoring tool. I trust my colleagues' judgement, but given that root gives complete access and control over everything on a Linux box, I still worry, even though I'm certain they will never do anything other than edit the configuration files.

Give them permission to submit pull requests to your configuration management repo.
Unlike with AI, I'm certain my colleagues won't hallucinate steps, and they know what the limits of their knowledge and skills are. AI is a probabilistic system that can generate an unpredictable range of outputs given the same input. Although AI might provide the desired output most of the time, there's absolutely no way to guarantee that it won't at some point do "rm -rf /" on a Linux box.
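The "range of outputs given the same input" point can be shown in miniature with a toy weighted sampler. Everything below is made up for illustration (the lookup table, the probabilities); no real LLM works on a two-entry table, but the failure mode is the same shape: identical input, probabilistic output.

```python
import random

# Toy next-token table with invented probabilities: after "rm", the
# sampler usually emits "--help" but occasionally emits "-rf".
NEXT = {"rm": [("--help", 0.95), ("-rf", 0.05)]}

def sample(token: str) -> str:
    """Return one weighted-random continuation for `token`."""
    choices, weights = zip(*NEXT[token])
    return random.choices(choices, weights=weights)[0]

# Same input five times; the outputs are not guaranteed to match.
print([sample("rm") for _ in range(5)])
```

No amount of observing "it did the right thing the last N times" bounds what draw N+1 produces, which is the commenter's point about guarantees.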
I use AI in my job, but I don't let it have any agency over files. The only way I would allow that is from within a VM or a container. I know developers who use AI and have increased their productivity by multiple factors. They won't let something like Personal Computer loose in their environments. If you think that giving Personal Computer free rein over your stuff is a good idea, I seriously question your judgement as a developer.
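The file-agency restriction described above can be sketched in a few lines of Python. This is a hypothetical harness policy, not any real agent framework's API: resolve every path the agent requests and refuse anything that lands outside one designated sandbox directory.

```python
from pathlib import Path

# The only directory tree the agent is allowed to touch (hypothetical path).
SANDBOX = Path("/tmp/agent-sandbox").resolve()

def safe_path(requested: str) -> Path:
    """Resolve a path the agent asked for; refuse anything outside SANDBOX."""
    p = (SANDBOX / requested).resolve()
    # Blocks "../" traversal and absolute paths (is_relative_to: Python 3.9+).
    if not p.is_relative_to(SANDBOX):
        raise PermissionError(f"agent tried to leave the sandbox: {requested}")
    return p

# Inside the sandbox: allowed.
print(safe_path("notes/todo.txt"))

# An escape attempt is refused instead of touching anything outside:
try:
    safe_path("../../home/someone/.ssh/id_rsa")
except PermissionError as exc:
    print("blocked:", exc)
```

A check like this only limits one process that cooperates with the wrapper, which is why the commenter still wants the whole thing inside a VM or container rather than trusting in-process guardrails.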
> You can call it what you want. I am not vague prompting; there is a detailed plan and step-by-step creation. The code is reviewed and thoroughly tested at each step. Highly experienced developers are using it, including at 37 Signals. Wake up.

Highly experienced developer here, thanks.
The most secure thing you can do is say, "Oh, HELL no!" and back away slowly, keeping it covered the entire way.
The only other option would be beating it to death with a metal bat, but that's harder on the data, even if it does prevent it from being sent outside of your system.
> Give them permission to submit pull requests to your configuration management repo.

LOL, one of the first questions I asked when starting this job was "where's the version control?" It's there for some stuff, but not everything that needs it. I have no idea why. Every time I have to make configuration changes I ask myself why the everliving hell I am doing direct edits on a production server. I'm going to stop now before I launch into a seriously long rant on the stupidity of this situation.
> Highly experienced developer here, thanks.

While it may indeed be your experience that LLMs are a hindrance more than a help, that's certainly not true for everyone. I can think of many examples where AI (and yes, even LLMs) is helping with undeniably non-trivial real problems.
Pass. LLMs are not AI. You can use them to help with some trivial tasks, but I've found them to be a hindrance more than anything else every time I tackle real problems.
> Anyway, generally speaking, it seems to me the smart bet is to familiarize oneself with the types of problems and situations for which LLMs are useful, rather than continuing to complain about those for which they are not.

Sure, but they are slathering LLMs over everything, including, as this article describes, directly manipulating your stored data. In this particular case you'd be very prudent to be wildly skeptical.
> Not any specific thing you find interesting about whales. Not any key points that the educational guide is supposed to convey, or to what sort of audience. Just, like, barf up some slop. My boss asked me for a podcast. Make me a podcast. Human almost completely out of the loop.

Notice they didn't even specify what kind of whale.
It's bleak stuff. And this is their own advertising material, not the oppo research!
> While it may indeed be your experience that LLMs are a hindrance more than a help, that's certainly not true for everyone. I can think of many examples where AI (and yes, even LLMs) is helping with undeniably non-trivial real problems.

Just FYI: when making an argument for why or how something helps, especially one three paragraphs long, it's usually a good idea to list at least a single example.
Ultimately, whether anyone likes it or not, AI and LLM use will become normalized (yes, even over the objections of highly experienced developers). The situation reminds me quite a bit of the initial resistance to GUIs in favor of CLIs in the '80s. "They require more memory!", "They're slower!", "They're less precise!"... sure, but they took over anyway because of their usability for non-programmers.
Anyway, generally speaking, it seems to me the smart bet is to familiarize oneself with the types of problems and situations for which LLMs are useful, rather than continuing to complain about those for which they are not. (Having written that, I still have no interest in letting agents near my personal stuff as described in this article.)