Perplexity’s “Personal Computer” brings its AI agents to the, uh, Personal Computer

Fatesrider

Ars Legatus Legionis
24,977
Subscriptor
If the prospect of letting “a persistent, digital proxy of you” loose on your private local files is worrying to you, you’re not alone. Perplexity promises that Personal Computer operates in a “secure environment with clear safeguards,” that all “sensitive actions” require user approval, that it keeps a “full audit trail” of every session, and that the system has a “kill switch” to stop things from getting out of hand in the most extreme cases.
The most secure thing you can do is say, "Oh, HELL no!" and back away slowly, keeping it covered the entire way.

The only other option would be beating it to death with a metal bat, but that's harder on the data, even if it does prevent it from being sent outside of your system.
 
Upvote
75 (78 / -3)
Agentic AI tools are not ready for widespread use in production environments. It might be a neat toy in a sandbox.

Using these tools is going to be the ruin of anyone using them in their current state; the negative consequences are very foreseeable and should be widely known. When a chatbot makes a mistake summarizing my email, the stakes are usually low. Allowing agents to run all over my system raises much higher stakes, ones I am not willing to bet my career or my personal files on.

I don't trust LLMs to retrieve information from my emails properly, consistently, and without error. Why on earth would we let agents with the same or worse level of functioning run free in our systems? There have already been major failures in other agent deployments, with agents deleting databases after a memory overflow made them forget the guardrail of asking the human before proceeding.
 
Upvote
80 (81 / -1)
But Personal Computer, running on a Mac Mini, also gives Perplexity’s agents local access to your files and apps, which it can open and manipulate directly in order to attempt to complete those tasks.


Anyone else have loud noises and flashing lights appear in their head as they read this?
 
Upvote
72 (73 / -1)

Kendokaa

Ars Praetorian
528
Subscriptor
Agentic AI tools are not ready for widespread use in production environments. It might be a neat toy in a sandbox.

Using these tools is going to be the ruin of anyone using them in their current state; the negative consequences are very foreseeable and should be widely known. When a chatbot makes a mistake summarizing my email, the stakes are usually low. Allowing agents to run all over my system raises much higher stakes, ones I am not willing to bet my career or my personal files on.

I don't trust LLMs to retrieve information from my emails properly, consistently, and without error. Why on earth would we let agents with the same or worse level of functioning run free in our systems? There have already been major failures in other agent deployments, with agents deleting databases after a memory overflow made them forget the guardrail of asking the human before proceeding.
One prominent characteristic of tech these days seems to be that it's always brought to market years before it's ready, and in the case of LLMs, there are consequences.
 
Upvote
28 (30 / -2)

Sarty

Ars Tribunus Angusticlavius
7,816
an introductory video shows Personal Computer’s questions in a sidebar asking things like, “Create an interactive educational guide” and “create a podcast about whales.”
Not any specific thing you find interesting about whales. Not any key points that the educational guide is supposed to convey, or to what sort of audience. Just, like, barf up some slop. My boss asked me for a podcast. Make me a podcast. Human almost completely out of the loop.

It's bleak stuff. And this is their own advertising material, not the oppo research!
 
Upvote
77 (80 / -3)
Anyone else have loud noises and flashing lights appear in their head as they read this?
Then, to top it off:

Perplexity users [ and possibly anyone else? ] can also log in remotely to their local copy of Personal Computer, making it “controllable from any device, anywhere,”

"No, I don't think I will"
 
Upvote
52 (52 / 0)

Waco

Ars Tribunus Militum
2,212
Subscriptor
I have been able to build and create things I thought were out of reach. I have caught up on my to-do list for existing projects. Software development is fun again. I read these comments whenever I feel like I am falling behind, seeing the absolute cope and low agency of the commenters.
You aren't developing software if you're vibe coding. You're creating a mess that appears to do what you want in the best of cases.
 
Upvote
67 (72 / -5)
I have been able to build and create things I thought were out of reach. I have caught up on my to-do list for existing projects. Software development is fun again. I read these comments whenever I feel like I am falling behind, seeing the absolute cope and low agency of the commenters.

No, you have started a process where something else created things that you could not. Eventually your boss, some AI, and/or the universe will realize that other people in faraway lands, paid a fraction of what you are, could also set that ball rolling. Or perhaps somebody will prompt an AI to "do nitsujmai's job". Then you will have plenty of time to display your own low agency, whether or not you can also "absolute cope" with what your life will have become.
 
Upvote
52 (55 / -3)

Coriolanus

Ars Tribunus Angusticlavius
8,244
Subscriptor++
 
Upvote
60 (63 / -3)

Sarty

Ars Tribunus Angusticlavius
7,816
One prominent characteristic of tech these days seems to be that it's always brought to market years before it's ready, and in the case of LLMs, there are consequences.
I would put it a little differently: it feels like in another era, there could be a baseline assumption that a product was broadly fit for purpose. Caveat emptor, of course. It might be buggy, it might be defective. But you could approach the product description knowing some kind of effort had been made. Now nobody cares; it really doesn't matter if it almost always works, or just mostly works, or if it will absolutely tank critical systems. In all three cases, just slap an identical "btw you can't trust this tool" statement in the EULA and YOLO that shit out the door.
 
Upvote
36 (38 / -2)

KurtisMayfield

Ars Scholae Palatinae
639
Safeguards being a line in the system prompt to say, “Pretty please with sugar on top, don’t delete the user’s files 🥺
I read an article where a programmer who uses AI a lot included "Faulty code would be embarrassing" in his instructions.

Yeah, the LLM feels that.
 
Upvote
28 (30 / -2)
So far the only two companies that have an approach to AI that I find acceptable are Apple and Firefox, because they both include a big “OFF” switch.

I don’t want this crap. As I see it the only good thing about integrating AI into things is the hitherto unknown sensation of choosing a product on the basis of a feature that it doesn’t have.
 
Upvote
30 (31 / -1)

Oldnoobguy

Ars Tribunus Militum
2,177
Subscriptor
I have been able to build and create things I thought were out of reach. I have caught up on my to-do list for existing projects. Software development is fun again. I read these comments whenever I feel like I am falling behind, seeing the absolute cope and low agency of the commenters.
Linux sysadmin now doing big data type stuff at one of the largest financial institutions in the world here. In the next few weeks I need to provide training to some other members of the team on how to edit some configuration files for a monitoring tool used to check the status of hundreds of application websites. Editing these files requires root-level access. None of these team members have Linux experience, but they are very competent at what they currently do. I am more than a little anxious about them having root-level access to these two servers, because the servers are critical for far more than this one monitoring tool. I trust my colleagues' judgement, but given that root gives complete access and control over everything on a Linux box, I still worry about my colleagues getting this access, even though I'm certain they will never do anything other than edit the configuration files.

Unlike with AI, I'm certain my colleagues won't hallucinate proper steps, and they know what the limits of their knowledge and skills are. AI is a probabilistic system that can generate an unpredictable range of outputs given the same input. Although AI might provide the desired output most of the time, there's absolutely no way to guarantee that it won't at some point do "rm -rf /" on a Linux box.
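To illustrate that point with a toy sketch (the function, its action list, and the weights are all invented here, not a real model):

```python
import random

def toy_agent(prompt: str, temperature: float, seed=None) -> str:
    """Toy stand-in for an LLM agent: picks its next action by weighted chance."""
    actions = ["edit the config file", "ask the human first", "rm -rf /"]
    weights = [0.70, 0.25, 0.05]  # even a rare catastrophic action has some mass
    if temperature == 0:
        return actions[0]  # greedy decoding: always the most likely action
    rng = random.Random(seed)
    return rng.choices(actions, weights=weights)[0]

# Same prompt, same settings, different seeds: the outputs vary, and
# nothing rules out the catastrophic action ever being sampled.
outputs = {toy_agent("update the config", temperature=1.0, seed=s) for s in range(200)}
```

With temperature above zero, identical inputs do not guarantee identical outputs, which is the whole problem with letting such a system act unsupervised.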

I use AI on my job, but I don't let it have any agency over files. The only way I would do that is from within a VM or a container. I know developers who use AI and have increased their productivity by multiple factors. They won't allow something like Personal Computer loose in their environments. If you think that giving Personal Computer free rein over your stuff is a good idea, I seriously question your judgement as a developer.

Edited: Added some words, corrected some spelling. Probably missed a few things needing correction.
 
Last edited:
Upvote
40 (41 / -1)

snowcone

Ars Scholae Palatinae
676
Safeguards being a line in the system prompt to say, “Pretty please with sugar on top, don’t delete the user’s files 🥺
You're absolutely right! Let me give you a breakdown of what will be deleted instead:
  • Operating system files
  • Filesystem metadata
  • Boot records and other disk/partition metadata
Would you like any more help completing tasks on your computer?
 
Upvote
35 (37 / -2)

DaiMacculate

Ars Praetorian
403
Subscriptor
and someone in a 'faraway' land can't do that to you because you do it more slowly? You (maybe) use frameworks and high-level programming languages, and that's good. But where did it leave the guy who shunned those advances and remained at the same speed and skill level as when he graduated?
AI is a tool, and it is possible to learn how to use it properly.
 
Upvote
17 (27 / -10)

clewis

Ars Tribunus Militum
1,730
Subscriptor++
Linux sysadmin now doing big data type stuff at one of the largest financial institutions in the world here. In the next few weeks I need to provide training to some other members of the team on how to edit some configuration files for a monitoring tool used to check the status of hundreds of application websites. Editing these files requires root-level access. None of these team members have Linux experience, but they are very competent at what they currently do. I am more than a little anxious about them having root-level access to these two servers, because the servers are critical for far more than this one monitoring tool. I trust my colleagues' judgement, but given that root gives complete access and control over everything on a Linux box, I still worry about my colleagues getting this access, even though I'm certain they will never do anything other than edit the configuration files.

Unlike with AI, I'm certain my colleagues won't hallucinate proper steps, and they know what the limits of their knowledge and skills are. AI is a probabilistic system that can generate an unpredictable range of outputs given the same input. Although AI might provide the desired output most of the time, there's absolutely no way to guarantee that it won't at some point do "rm -rf /" on a Linux box.

I use AI on my job, but I don't let it have any agency over files. The only way I would do that is from within a VM or a container. I know developers who use AI and have increased their productivity by multiple factors. They won't allow something like Personal Computer loose in their environments. If you think that giving Personal Computer free rein over your stuff is a good idea, I seriously question your judgement as a developer.
Give them permission to submit pull requests to your configuration management repo.
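For instance (a hypothetical flow; the repo name, branch name, and config file are all invented), the edit becomes an ordinary reviewed commit instead of a root session on the server:

```shell
set -e
tmp=$(mktemp -d)
# A local bare repo stands in here for the team's hosted config repo
git init -q --bare "$tmp/config-monitoring.git"
git clone -q "$tmp/config-monitoring.git" "$tmp/work"
cd "$tmp/work"
git config user.email teammate@example.com
git config user.name "Teammate"
git checkout -q -b add-health-check
# The config edit itself (file name and contents are illustrative)
printf 'check_url https://example.internal/health\n' >> checks.conf
git add checks.conf
git commit -q -m "Add health check for example.internal"
git push -q origin add-health-check
# A reviewer merges the branch; a deploy job (not shown) then applies
# the merged config on the servers with whatever privileges it needs.
```

That way no teammate ever touches root: the sysadmin reviews the diff, and only the deploy machinery runs privileged.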
 
Upvote
7 (9 / -2)

Waco

Ars Tribunus Militum
2,212
Subscriptor
You can call it what you want. I am not vague-prompting; there is a detailed plan and step-by-step creation. The code is reviewed and thoroughly tested at each step. Highly experienced developers are using it, including at 37signals. Wake up.
Highly experienced developer here, thanks.

Pass. LLMs are not AI. You can use them to help with some trivial tasks, but every time I've tackled real problems, I've found them to be more of a hindrance than anything else.
 
Upvote
40 (42 / -2)

MilanKraft

Ars Tribunus Angusticlavius
6,711
The most secure thing you can do is say, "Oh, HELL no!" and back away slowly, keeping it covered the entire way.

The only other option would be beating it to death with a metal bat, but that's harder on the data, even if it does prevent it from being sent outside of your system.
Kill-it-with-fire.gif
 
Upvote
20 (23 / -3)

Oldnoobguy

Ars Tribunus Militum
2,177
Subscriptor
Give them permission to submit pull requests to your configuration management repo.
LOL - one of the first questions I asked when starting this job was, "Where's the version control?" It's there for some stuff, but not everything that needs it. I have no idea why. Every time I have to make configuration changes, I ask myself why the everliving hell I am doing direct edits on a production server. I'm going to stop now before I launch into a seriously long rant about the stupidity of this situation.

ETA: This company uses industry-leading configuration, versioning, and orchestration tools, but there are gaps in their application. I don't know if it's due to costs or needing to reach a minimum number of instances. I would be happy if I could simply use something like Salt.
 
Last edited:
Upvote
15 (17 / -2)

Uncivil Servant

Ars Scholae Palatinae
4,667
Subscriptor
Why would I want this? What does it even do? I have a cell phone, a Kindle, a flat-screen TV; these things look like what you would see in Star Trek. Seriously, cell phones have gone from being Original Series communicators to TNG tricorders. Half of DS9's episodes had Kira or Odo or both picking up e-reader tablets from their desk, shaking their heads, and saying something about rosters and schedules.

You could literally make billions of dollars over the past 50 years with the strategy of "make Star Trek real". Now we have too much money chasing too little innovation.
 
Upvote
8 (11 / -3)

DNA_Doc

Ars Scholae Palatinae
904
Highly experienced developer here, thanks.

Pass. LLMs are not AI. You can use them to help some trivial tasks but I've found them to be a hindrance more than anything else when tackling real problems every time.
While it may indeed be your experience using LLMs that they are a hindrance more than a help, that's certainly not true for everyone. I can think of many examples where AI (and yes, even LLMs) are helping with undeniably non-trivial real problems.

Ultimately, whether anyone likes it or not, AI and LLM use will become normalized (yes, even over the objections of highly experienced developers). The situation reminds me quite a bit of the initial resistance to GUIs in favor of CLIs in the 80s. "They require more memory!", "They're slower!", "They're less precise!"... sure - but they took over anyway because of their usability for non-programmers.

Anyway, generally speaking, it seems to me the smart bet is to familiarize oneself with the types of problems and situations for which LLMs are useful, rather than continuing to complain about those for which they are not. (Having written that, I still have no interest in letting agents near my personal stuff as described in this article.)
 
Upvote
-17 (18 / -35)

Waco

Ars Tribunus Militum
2,212
Subscriptor
Anyway, generally speaking, it seems to me the smart bet is to familiarize oneself with the types of problems and situations for which LLMs are useful, rather than continuing to complain about those for which they are not.
Sure - but they are slathering LLMs over everything including, as this article describes, directly manipulating your stored data. In this particular case you'd be very prudent to be wildly skeptical.
 
Upvote
36 (37 / -1)
Not any specific thing you find interesting about whales. Not any key points that the educational guide is supposed to convey, or to what sort of audience. Just, like, barf up some slop. My boss asked me for a podcast. Make me a podcast. Human almost completely out of the loop.

It's bleak stuff. And this is their own advertising material, not the oppo research!
Notice they didn't even specify what kind of whale.

You might end up with a podcast about people addicted to Genshin Impact.
 
Upvote
13 (14 / -1)
While it may indeed be your experience using LLMs that they are a hindrance more than a help, that's certainly not true for everyone. I can think of many examples where AI (and yes, even LLMs) are helping with undeniably non-trivial real problems.

Ultimately, whether anyone likes it or not, AI and LLM use will become normalized (yes, even over the objections of highly experienced developers). The situation reminds me quite a bit of the initial resistance to GUIs in favor of CLIs in the 80s. "They require more memory!", "They're slower!", "They're less precise!"... sure - but they took over anyway because of their usability for non-programmers.

Anyway, generally speaking, it seems to me the smart bet is to familiarize oneself with the types of problems and situations for which LLMs are useful, rather than continuing to complain about those for which they are not. (Having written that, I still have no interest in letting agents near my personal stuff as described in this article.)
Just FYI: when making an argument for why or how something helps, especially one three paragraphs long, it's a good idea to cite at least one example.
 
Upvote
27 (29 / -2)