> Anthropic also notes on a support page that the model is trained to avoid “risky operations” such as moving or investing money, modifying files, scraping facial images, or inputting “sensitive data.” But the company also warns that such training safeguards “aren’t perfect” and “aren’t absolute,” meaning that “Claude may occasionally act outside these boundaries.”

I love these companies constantly admitting they don't actually know how their LLMs work, because they cannot stop it from doing stupid things. "It's trained not to do this, but it might anyway! Good luck everyone!"
> As long as it's an optional feature, IDGAF.

Unless you are Ted-Kaczynski-level removed from regular society, off the grid in a shack way out in the woods, someone somewhere is soon going to let the slopbot operate on data that is important to you. You may find the consequences of this careless usage to be... unpleasant.
> Anthropic says it has safeguards in place to prevent common risks like prompt injection, and it will limit access to certain “off limits” apps (e.g., “investment and trading platforms, cryptocurrency”) by default.

Is it me, or is quoting those two apps as the important "off limits" categories the most rich-white-guy thing?
> I’m all in. How do I give it my social and credit card numbers?

No worries, you don't need to give your SSN and CC numbers to Claude. It already has them courtesy of DOGE.
> My cybersecurity past is screaming at this like... no - why - why would you ever?!

You know why. It begins with “L” and ends with “aziness”.
> We have people opening tickets to install Python on their Macs because AI told them to do so in order to complete a simple task. They aren’t technical enough to know why that’s unnecessary when their computer has tools that can already do what they want (like Excel, Numbers, or Google Sheets).

My first-level IT got outsourced to India, so things that used to take minutes now take days. So those tickets you're talking about will be very interesting to watch get resolved.
I can’t wait for all the tickets that are going to come about from random incompatibilities and collisions between actions when people turn over full control of their computers.
Yah, it “saved you time” a few times, and then it cost you all your work once.
Claude Code can now take over your computer
> And here I thought Microsoft Recall was the worst idea.

Never fear. There will always be a worse idea. It is a corollary of stupidity and about as limited.
> I mean, Microsoft already took over my Windows installation with unwanted AI slop. It's why I converted to a Linux installation.

TBF, I bailed on Windows because it seemed to me that Microsoft was turning itself into Google, where your shit isn't yours but theirs and all your data belongs to them.
I'm surprised Microsoft isn't suing Anthropic and all these other agentic AI companies for attempting to bypass Copilot on W11.
Either way, I'm not interested in giving any control to an automated theft machine to steal my data and poorly impersonate me on my own device.
> I can't wait until my coworker who is in love with Claude* runs it and it shits his bed.
>
> \* only figuratively (...I hope)

No, you don't. Not figuratively at all.
> Is it me, or is quoting those two apps as the important "off limits" categories the most rich-white-guy thing?

You've sort of nailed something on the head here.
What about private browsing windows where people might be looking up abortion or LGBTQ+ info? What about chat/social apps? Even Recall had more sensible defaults and could have apps opted into not being recordable.
> The top use case I can see (and have used) is setting up kiosk systems for temporary installs of non-networked (when deployed, isolated when being configured), non-sensitive exhibits. I would never trust it with free rein of my personal computer, but helping me get the environment set up for something I'm throwing away after a week saves me a lot of time with minimal risk. Step 1 is always determining the risk of your activity and the threshold of risk you're willing to accept for a given task.

Laziness confirmed.
> I think we do, because I don't think the worst thing you can imagine is the actual worst thing that could happen. Sadly.

I am able to imagine some pretty bad things, like this AI running on a computer deep in the Cheyenne Mountain Complex and hitting the wrong "button" during a simulated attack...
> Don't make me tap the sign.

This motto seems to have changed in the last few years to “a computer can never be held accountable, but neither, seemingly, can anyone in management, so what's the difference?”
> Laziness confirmed.

The incorrect assumption there is that it's my job. These projects are volunteer work I do for the community.
do your job.
> I gave claude-code root without password on my Linux desktop months ago so it could quickly address some issues I was having. It solved my problem and there was no collateral damage. I wouldn't put it in charge of anything other people depend on, but I think it's good enough to treat like an entry-level engineer.

A product that deletes all of your data only 1% of the time is not safe and is stupid to use, but it usually won't delete all your data. It will generate quite a few anecdotes of success. Basing your judgment on those anecdotes is going to get you in trouble.
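The arithmetic behind that warning is worth spelling out. A quick back-of-the-envelope sketch (the 1%-per-run catastrophic failure rate is the commenter's hypothetical, not a measured figure):

```python
# Hypothetical: a tool that catastrophically fails (wipes your data)
# 1% of the time per risky run. Individual runs almost always produce
# success anecdotes, while the cumulative risk quietly climbs.
p_fail = 0.01

for runs in (10, 50, 100):
    p_any_loss = 1 - (1 - p_fail) ** runs
    print(f"After {runs} runs: {p_any_loss:.0%} chance of at least one wipe")
```

After 100 such runs, the chance of at least one data-loss event is roughly 63%, even though every individual run is a 99%-likely success story.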
> I love these companies constantly admitting don't actually know how their LLMs work, because they cannot stop it from doing stupid things. "It's trained not to do this but it might anyways! Good luck everyone!"

Much has also been written about users becoming beta testers more often. It's one thing to get a free video game demo to help report bugs, but if I'm paying a company up to $100-200/mo (Claude Max), I expect to get FULLY FUNCTIONAL software/hardware.

Fully functional? So 2022 of you.
We really are going to end up living in Idiocracy.