I mean, cool, but end-user privacy is probably the lowest-level concern for many of us. The model you're using still has all the same ethical concerns re: training, the same environmental concerns, etc. And in many ways, widespread adoption of privacy-focused services like this may prevent us from getting the safety regulation that we want (e.g., the logs of that guy who murdered his mom thanks to ChatGPT's encouragement, logs we likely would not have if a service like this had been used).
> when the LLM tells us that you should use glue to keep the cheese on a pizza.

In fairness, that was nearly two years ago and the tech has become a lot better since; however, I agree you should still not take the tokens output by an LLM at human value.
> Sam Altman, CEO of OpenAI, has said such rulings mean even psychotherapy sessions on the platform may not stay private.

Wait. I thought OpenAI declares solemnly and earnestly that ChatGPT is only for entertainment purposes and that it should not be relied upon for confidential health counsel?
> In fairness, that was nearly two years ago and the tech has become a lot better since; however, I agree you should still not take the tokens output by an LLM at human value.

Two years ago it was still kind of cute when it made mistakes. Not so much now.
These aren't "moral transgressions". We have documented cases of the LLMs coaching people to commit suicide, and at least one man to commit murder and then suicide. Or generating CSAM.Interests me that the first few responses seem to feel that encryption for LLMs might not be a good thing because it interferes with regulation. I disagree with that sentiment. I have long felt, and continue to feel, that a technical solution to moral transgressions is a fool's errand. Much better fought culturally by building consensus about what to do when these are encountered and lowering barriers to reporting than trying to put yet another invasive task on the shoulders of already drowning regulators/enforcers.
> In fairness, that was nearly two years ago and the tech has become a lot better since; however, I agree you should still not take the tokens output by an LLM at human value.

To be fair, the tokens output by humans can be pretty sus, as well.
> In much the way Signal uses encryption to make messages readable only to parties participating in a conversation, Confer protects user prompts, AI responses, and all data included in them.

The NYT court case shows how important this is. Even if AI companies want to delete your information, they legally cannot.
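(To make the Signal comparison concrete: the idea is that prompts are encrypted on the client with a key the provider never holds, so anything a court orders preserved is ciphertext. A minimal sketch of that idea in Python, using the third-party `cryptography` package; this illustrates client-held-key encryption generally, not Confer's actual protocol:)

    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    import os

    key = AESGCM.generate_key(bit_length=256)  # generated and kept on the client
    nonce = os.urandom(12)                     # 96-bit nonce, unique per message

    prompt = b"something I would never type into a logged chatbot"
    ciphertext = AESGCM(key).encrypt(nonce, prompt, None)

    # The service only ever stores/relays (nonce, ciphertext); an order to
    # "preserve the logs" preserves nothing readable without the client's key.
    assert AESGCM(key).decrypt(nonce, ciphertext, None) == prompt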
> There are other private LLMs, but none from the big players.

Apple Private Cloud Compute seems to have many of the privacy features you describe Confer as having.
> Two years ago it was still kind of cute when it made mistakes. Not so much now.
> https://meincmagazine.com/ai/2026/01/...es-after-investigation-finds-dangerous-flaws/
> These aren't "moral transgressions". We have documented cases of the LLMs coaching people to commit suicide, and at least one man to commit murder and then suicide. Or generating CSAM.
> This is an unsafe product. With a system like this, the AI company can get off without any pesky liability for their product functioning in harmful or outright criminal ways.

Not meaning to minimize the damage by calling it a moral transgression. I was differentiating from rules that aren't inherently moral issues (like driving with an expired registration, for instance). My point was that relying on technical solutions (e.g., "we'll just watch and track everything") doesn't have a good track record of actually stopping the harm. I think it's pretty clear that holding folks accountable with all of this out in the open is far from a slam dunk, and I don't really believe that we would do that much worse holding people accountable if we encrypted things. The problem isn't access to data; it's a systemic issue that's better solved (in my opinion) by addressing norms and the broader culture.
> Not meaning to minimize the damage by calling it a moral transgression. I was differentiating from rules that aren't inherently moral issues (like driving with an expired registration, for instance). My point was that relying on technical solutions (e.g., "we'll just watch and track everything") doesn't have a good track record of actually stopping the harm. I think it's pretty clear that holding folks accountable with all of this out in the open is far from a slam dunk, and I don't really believe that we would do that much worse holding people accountable if we encrypted things. The problem isn't access to data; it's a systemic issue that's better solved (in my opinion) by addressing norms and the broader culture.

By analogy, do you really think making non-metal guns that can avoid metal detectors is an ideal solution for protecting Second Amendment rights?
By analogy, do we really think metal detectors are the ideal solution to school violence?
> Two years ago it was still kind of cute when it made mistakes. Not so much now.
> https://meincmagazine.com/ai/2026/01/...es-after-investigation-finds-dangerous-flaws/
> These aren't "moral transgressions". We have documented cases of the LLMs coaching people to commit suicide, and at least one man to commit murder and then suicide. Or generating CSAM.
> This is an unsafe product. With a system like this, the AI company can get off without any pesky liability for their product functioning in harmful or outright criminal ways.

Coaching someone to commit suicide sounds like an obvious moral transgression. Are you trying to say it is more than that, or something other than that?
> Coaching someone to commit suicide sounds like an obvious moral transgression. Are you trying to say it is more than that, or something other than that?

More than that. Calling it a "moral transgression" (rather than just "a crime" or "a tort") is minimizing it. It's like saying a drug manufacturer that doesn't list suicidal thoughts as a side effect on the label is merely committing a "moral transgression."
> The key length in bytes had me puzzled for an instant. Isn't expressing it in bits more standard, since not all key lengths fall neatly on powers of two?

??? Bytes have nothing to do with powers of 2 until you scale them into kibi- and gibibytes, as the memory industry does due to CPU/µC addressing. Bytes are also commonly scaled in metric for linear storage.
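(For what it's worth, both conventions describe the same quantity at 8 bits per byte, and neither requires a power of two; a 24-byte key is an ordinary 192-bit AES key. A trivial Python illustration, with the key sizes picked by me as examples:)

    # Bytes vs. bits for some common key sizes (8 bits per byte).
    # Note that 24 bytes (AES-192) is not a power of two in either unit.
    for key_bytes in (16, 24, 32):
        print(f"{key_bytes}-byte key = {key_bytes * 8}-bit key")
    # prints:
    # 16-byte key = 128-bit key
    # 24-byte key = 192-bit key
    # 32-byte key = 256-bit key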
It's not "lowering barriers to reporting" when you put the data where it is not subject to investigation.Interests me that the first few responses seem to feel that encryption for LLMs might not be a good thing because it interferes with regulation. I disagree with that sentiment. I have long felt, and continue to feel, that a technical solution to moral transgressions is a fool's errand. Much better fought culturally by building consensus about what to do when these are encountered and lowering barriers to reporting than trying to put yet another invasive task on the shoulders of already drowning regulators/enforcers.
> Interests me that the first few responses seem to feel that encryption for LLMs might not be a good thing because it interferes with regulation. I disagree with that sentiment. I have long felt, and continue to feel, that a technical solution to moral transgressions is a fool's errand. It's much better fought culturally, by building consensus about what to do when these are encountered and by lowering barriers to reporting, than by putting yet another invasive task on the shoulders of already drowning regulators/enforcers.

I'm definitely mixed on this. On the one hand, I think things like Signal are critical. People need to be able to have fully private conversations, and people are the ones who benefit from secure messaging.
> “It’s been really interesting and encouraging and amazing to hear stories from people who have used Confer and had life-changing conversations, in part because they haven’t felt free to include information in those conversations with sources like ChatGPT or they had insights using data that they weren’t really free to share with ChatGPT before but can using an environment like Confer.”

This is, frankly, kind of terrifying. It's going to be hard to convince me that people having "life-changing conversations" with LLMs by sharing their most private details is a good thing. Many people are already using LLMs as therapists, and having a sycophantic, hallucinatory, corporate "therapist" trained on 99% internet garbage is a bad, bad idea. Which, I suppose, is why people keep killing themselves or experiencing AI psychosis. I'm sure that some people are getting some benefit from it, but if a human therapist induced even a single patient to kill themselves, they'd be in jail forever, not getting billions of dollars of investment money.
> I'm confused. Is this using their own LLM or connecting me with third-party models?

In theory, you'd have a commodity LLM provider that would be running a model created by other people on their cloud servers.
> LLMs have many externalities, chief among them environmental/ecological ones (training is resource intensive), moral/ethical ones (stealing from artists for the training part), and privacy ones. This project addresses only the latter and leaves the others untouched. Privacy should be by default, not a nice-to-have optional feature, so I'm not wowed by the E2E here. So I'm not sold on the project of an Nth LLM assistant that doesn't have much of a differentiating factor.

The E2EE IS the differentiating factor. It protects the inference process. What other LLM does that?
Dan Goodin said:
> Moxie Marlinspike—the pseudonym of an engineer who set a new standard for private messaging with the creation of the Signal Messenger—is now aiming to revolutionize AI chatbots in a similar way.

Oh, that's a relief. I was afraid people were naming their offspring after fizzy drinks.
> As soon as you test it / try to push it out of bounds, you repeatedly get a "sorry, can't help with that," which is good.

No, it's not good. LLMs should not be arbitrarily censored. These days, it's already hard to ask GPT "how to download torrents?" without being served half a page of moralizing BS before getting to the technical part.