Signal creator Moxie Marlinspike wants to do for AI what he did for messaging

Kurenai

Ars Scholae Palatinae
626
Subscriptor
I mean, cool, but end-user privacy is probably the lowest level of concern for many of us. The model you're using still has all the same ethical concerns re: training, the same environmental concerns, etc. And in many ways, widespread adoption of privacy-focused services like this may prevent us from getting the safety regulation that we want (e.g., the logs of the guy who murdered his mom thanks to ChatGPT's encouragement, logs we likely would not have if this service had been used).
 
Upvote
139 (195 / -56)

MilanKraft

Ars Tribunus Angusticlavius
6,806
While I'm not a fan of LLMs and the Fancy Fluency Machine market mania, if you find yourself in a place where the few legitimate use cases apply (e.g., brainstorming or outlining a large writing project) and you've already got a Proton account for VPN / secure email, I can confirm Lumo is good on the privacy front.

It's hard to tell exactly what is under the hood, but in theory Proton has taken well-known open-source engines (different ones for different purposes) and modified them to its standards. It seems they've been more selective about what Lumo trained on (likely no social media, based on the answers I get). It's also hard to be sure where the datacenters are, and therefore what type of power they're drawing (nuclear / hydro / solar / wind vs. fossil), but hopefully that becomes more transparent at some point. In my experience, Proton seems like a pretty ethical company.

The guardrails have a more Euro vibe (understandably). As LLMs go, it will treat you like an adult if you ask it questions like an adult, but everything within limits, as they say. As soon as you test it / try to push it out of bounds, you repeatedly get a "sorry, can't help with that," which is good. Lately (v1.2 and later, I think) the UI took on a GPT-like appearance, which scares me a little, but it is what it is. Obviously Proton doesn't have the resources to roll its own LLM from scratch. At least I don't think they do.

Anyway, it's a reasonably priced add-on to the regular Proton suite if you have a legitimate need for it.
 
Upvote
58 (60 / -2)

JohnDeL

Ars Tribunus Angusticlavius
8,745
Subscriptor
I mean, cool, but end-user privacy is probably the lowest level of concern for many of us. The model you're using still has all the same ethical concerns re: training, the same environmental concerns, etc. And in many ways, widespread adoption of privacy-focused services like this may prevent us from getting the safety regulation that we want (e.g., the logs of the guy who murdered his mom thanks to ChatGPT's encouragement, logs we likely would not have if this service had been used).

There's one other, fundamental problem with LLMs: they lie. (OK, "hallucinate," but the difference is immaterial from the end user's POV.) So all this service does is give us privacy when the LLM tells us to use glue to keep the cheese on a pizza.
 
Upvote
81 (110 / -29)

Purpleivan

Ars Praetorian
431
Subscriptor++
Where messages are concerned, the service is simply passing information from sender to receiver, so I can see securing that type of service, and the privacy concerns it addresses, as generally a positive thing.

In the case of generative AI, the service provided is not a conduit of information, but the generation of it by the service provider. Given the recent issues regarding what Grok can and does provide to users, hiding that from all eyes, including possibly regulators, could be a lot less positive.

Personally I'm on the fence about this, but it seems clear to me that with generative AI, private does not always equal good.
 
Upvote
66 (79 / -13)

KenM

Ars Centurion
287
Subscriptor++
I'm more concerned about the ethics of LLMs/GenAI: the sources of training data, the potential for abuse (graphic and otherwise), and the general inability to determine how the model came up with the answer it provided.

As long as questions like that are outstanding, and hallucinations are common, I'm not likely to use LLMs/GenAI for much, if anything.

(edited to be more specific around terminology)
 
Upvote
62 (64 / -2)

uploaded

Smack-Fu Master, in training
57
Sounds pretty good but... What model are they using? If they had the resources to train their own competitive model they'd surely say so, so they must be using some open source model. It would be nice to know which, or (as a user) to have a choice.
Also: pricing? I couldn't find a pricing page, and their homepage wouldn't let me read an "about us" without signing up with my email!
 
Upvote
42 (42 / 0)
It interests me that the first few responses seem to feel that encryption for LLMs might not be a good thing because it interferes with regulation. I disagree with that sentiment. I have long felt, and continue to feel, that a technical solution to moral transgressions is a fool's errand. It is much better fought culturally, by building consensus about what to do when these are encountered and by lowering barriers to reporting, than by trying to put yet another invasive task on the shoulders of already drowning regulators/enforcers.
 
Upvote
18 (32 / -14)

graylshaped

Ars Legatus Legionis
67,891
Subscriptor++
Sam Altman, CEO of OpenAI, has said such rulings mean even psychotherapy sessions on the platform may not stay private.
Wait. I thought OpenAI declared solemnly and earnestly that ChatGPT is only for entertainment purposes and that it should not be relied upon for confidential health counsel?
 
Upvote
59 (61 / -2)

hillspuck

Ars Scholae Palatinae
2,179
In fairness, that was nearly two years ago and the tech has become a lot better since; however, I agree you should still not take the tokens output by an LLM at human value
Two years ago it was still kind of cute when it made mistakes. Not so much now.
https://meincmagazine.com/ai/2026/01/...es-after-investigation-finds-dangerous-flaws/

It interests me that the first few responses seem to feel that encryption for LLMs might not be a good thing because it interferes with regulation. I disagree with that sentiment. I have long felt, and continue to feel, that a technical solution to moral transgressions is a fool's errand. It is much better fought culturally, by building consensus about what to do when these are encountered and by lowering barriers to reporting, than by trying to put yet another invasive task on the shoulders of already drowning regulators/enforcers.
These aren't "moral transgressions". We have documented cases of the LLMs coaching people to commit suicide, and at least one man to commit murder and then suicide. Or generating CSAM.

This is an unsafe product. With a system like this, the AI company can get off without any pesky liability for their product functioning in harmful or outright criminal ways.
 
Upvote
61 (76 / -15)

graylshaped

Ars Legatus Legionis
67,891
Subscriptor++
In fairness, that was nearly two years ago and the tech has become a lot better since; however, I agree you should still not take the tokens output by an LLM at human value
To be fair, the tokens output by humans can be pretty sus, as well.

Critical thinking doesn't allow source validation to be optional.
 
Upvote
35 (36 / -1)

quamquam quid loquor

Ars Tribunus Militum
2,851
Subscriptor++
In much the way Signal uses encryption to make messages readable only to parties participating in a conversation, Confer protects user prompts, AI responses, and all data included in them
The NYT court case shows how important this is. Even if AI companies want to delete your information, they legally cannot.

This is an interesting angle for leveraging the efficiencies of datacenter computing vs. running a local LLM.

My guess is the world will divide into public clouds and local LLMs. Unfortunately, services like this won't find the scale they need.
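Roughly, the client-side half of the idea looks like this, as a minimal sketch (I'm assuming a generic AES-GCM symmetric scheme purely for illustration; this is not Confer's actual protocol, and how the key is negotiated or how inference sees plaintext are separate questions):

```python
# Minimal sketch of client-side prompt encryption (hypothetical scheme,
# not Confer's protocol). Assumes a 256-bit AES-GCM key only the client holds.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 32-byte key, stays on the client
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # must be unique per message
prompt = b"a question I'd never type into ChatGPT"
ciphertext = aesgcm.encrypt(nonce, prompt, None)  # what the provider stores

# Only a holder of the key can recover the plaintext.
assert aesgcm.decrypt(nonce, ciphertext, None) == prompt
```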
 
Upvote
13 (19 / -6)
Two years ago it was still kind of cute when it made mistakes. Not so much now.
https://meincmagazine.com/ai/2026/01/...es-after-investigation-finds-dangerous-flaws/


These aren't "moral transgressions". We have documented cases of the LLMs coaching people to commit suicide, and at least one man to commit murder and then suicide. Or generating CSAM.

This is an unsafe product. With a system like this, the AI company can get off without any pesky liability for their product functioning in harmful or outright criminal ways.
Not meaning to minimize the damage by calling it a moral transgression. I was differentiating from rules that aren't inherently moral issues (like driving with an expired registration, for instance). My point was that relying on technical solutions (e.g., we'll just watch and track everything) doesn't have a good track record of actually stopping the harm. I think it's pretty clear that holding folks accountable with all of this out in the open is far from a slam dunk, and I don't really believe that we would do that much worse holding people accountable if we encrypted things. The problem isn't access to data; it's a systemic issue that's better solved (in my opinion) by addressing norms and the broader culture.

By analogy, do we really think metal detectors are the ideal solution to school violence?
 
Upvote
6 (22 / -16)

hillspuck

Ars Scholae Palatinae
2,179
Not meaning to minimize the damage by calling it a moral transgression. I was differentiating from rules that aren't inherently moral issues (like driving with an expired registration, for instance). My point was that relying on technical solutions (e.g., we'll just watch and track everything) doesn't have a good track record of actually stopping the harm. I think it's pretty clear that holding folks accountable with all of this out in the open is far from a slam dunk, and I don't really believe that we would do that much worse holding people accountable if we encrypted things. The problem isn't access to data; it's a systemic issue that's better solved (in my opinion) by addressing norms and the broader culture.

By analogy, do we really think metal detectors are the ideal solution to school violence?
By analogy, do you really think making non-metal guns that can evade metal detectors is an ideal solution for protecting Second Amendment rights?
 
Upvote
2 (14 / -12)
Interesting. I use Claude for coding assistance but otherwise refuse to use LLMs for much else (I like writing and reading on my own, and I have yet to be unable to just find a webpage with the information I need). For people who like the chat style of LLMs, though, this sounds like an improvement.

I apparently don't share the blanket dislike of LLMs that most other people on this site do. I think giving users better privacy protections is a good thing, and it's nice to know that some groups are developing solutions to that end.
 
Last edited:
Upvote
23 (27 / -4)
This addresses some of the concerns around the brazen privacy invasions of AI models, not how trustworthy a model is or how open to abuse it is. AI service providers should be subject to guardrails, abuse checking, etc.; the abuse should be cut off at the source. Garbage in, garbage out and all that. However, users should be able to control what data service providers have access to and what they keep. AI shouldn't assume all your data is theirs for the taking.
 
Upvote
16 (17 / -1)
Two years ago it was still kind of cute when it made mistakes. Not so much now.
https://meincmagazine.com/ai/2026/01/...es-after-investigation-finds-dangerous-flaws/


These aren't "moral transgressions". We have documented cases of the LLMs coaching people to commit suicide, and at least one man to commit murder and then suicide. Or generating CSAM.

This is an unsafe product. With a system like this, the AI company can get off without any pesky liability for their product functioning in harmful or outright criminal ways.
Coaching someone to commit suicide sounds like an obvious moral transgression. Are you trying to say it is more than that, or something other than that?
 
Upvote
-4 (8 / -12)

hillspuck

Ars Scholae Palatinae
2,179
Coaching someone to commit suicide sounds like an obvious moral transgression. Are you trying to say it is more than that, or something other than that?
More than that. Calling it a "moral transgression" (rather than just "a crime" or "a tort") is minimizing it. It's like saying a drug manufacturer not listing suicidal thoughts as a side-effect on a label is committing a "moral transgression."

Some crimes/torts are not moral transgressions at all. Accidentally speeding. Statutory rape of someone who is the same age and mental capability as you. Selling alcohol to someone who looks 28 but is actually 20.
 
Last edited:
Upvote
20 (24 / -4)
There are a large number of claims here, and the biggest one (which I won't believe until I see it proven far beyond a reasonable doubt) is that users' queries and results won't end up in some model accessible to others.

It's been a pretty straightforward thing, over and over, to prove LLMs encode entire tracts, be they books, files in a codebase, etc. It'll be pretty straightforward for the grey hats to figure out how to prove (or maybe disprove, hard as that is for a negative) that the systems people are talking to in Confer aren't taking their conversations and stuffing them in the magic box.
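The starting point would be something like this toy probe (a hypothetical sketch of mine; real extraction work is far more involved): feed the system the first half of a passage from one of your private conversations and see whether it completes the rest verbatim.

```python
# Toy memorization probe (hypothetical): does the model complete a
# known passage verbatim when given only its first half?
def looks_memorized(generate, passage: str, probe_len: int = 40) -> bool:
    cut = len(passage) // 2
    prefix, expected = passage[:cut], passage[cut:]
    completion = generate(prefix)           # any text-completion callable
    return completion.lstrip().startswith(expected.lstrip()[:probe_len])

# Example with a stand-in "model" that has memorized the passage:
passage = "It was the best of times, it was the worst of times"
print(looks_memorized(lambda p: passage[len(p):], passage))  # True
```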
 
Upvote
-9 (1 / -10)
LLMs have many externalities, chief among them environmental/ecological ones (training is resource-intensive), moral/ethical ones (stealing from artists for training), and privacy ones. This project addresses only the last and leaves the others untouched. Privacy should be the default, not a nice-to-have optional feature, so I'm not wowed by the E2E here. So I'm not sold on the project of an Nth LLM assistant that doesn't have much of a differentiating factor.
 
Upvote
8 (12 / -4)

pjladyfox

Ars Praetorian
435
Subscriptor
Every single aspect of AI as it is currently designed depends upon information fed into it from other sources to learn. The problem with this is quite simple: the sourcing of said information. As we have already seen, in most, if not all, cases this information is not ethically sourced. And the people doing this are hand-waving it all aside in the pursuit of absolute greed, and this is just one part of the problem.

The next problem is that powering this so-called AI takes massive amounts of resources: land, power, and even water. This, in turn, is slowly poisoning the areas where the data centers that run it are located, like miniature viruses introduced to the local flora and fauna, making everything sick and then radiating outward, affecting everything they touch by raising power bills and destroying jobs.

The last part is the worst of all: even if we could mitigate the damage of running this AI thing (and, as we have seen, greed has prevented the first two problems from being brought to heel), we're already seeing that human greed is preventing us from even placing reasonable controls upon it. We've seen children happily told how to hide their substance abuse from their parents before they died, we've seen it used as an excuse to replace jobs with substandard results, and we've seen how it's been used to rob artists and others blind of their work. I've yet to see anything good come from AI, and I'm afraid that by the time we do, more people will get sick, more people will die, and for, I suspect, very little in return for the cost we will have paid by that point.

AI needs to just be taken out to the deepest part of the ocean, along with the people who run the companies responsible for it, and dropped in, never to be seen or heard from again. But frankly, at this point I'd settle for the AI bubble bursting and it all being written off as a fad, which may be better in the long run.
 
Upvote
4 (13 / -9)
The key length in bytes had me puzzled for an instant. Isn't expressing it in bits more standard, since not all key lengths fall neatly on powers of two?
??? Bytes have nothing to do with powers of 2 until you scale them into kibibytes and gibibytes, as the memory industry does due to CPU/µC addressing. Bytes are also commonly scaled in metric for linear storage.

There are 8 bits in a byte. Key lengths of 256, 1024, or 2048 bits all divide cleanly by 8 and can be conveyed accurately in bytes if the author desires.
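For example, the conversion is exact in both directions (a quick Python illustration):

```python
# 8 bits per byte: common key lengths convert exactly, with no rounding.
for bits in (256, 1024, 2048, 4096):
    nbytes = bits // 8
    assert nbytes * 8 == bits              # round-trips exactly
    print(f"{bits}-bit key = {nbytes} bytes")
# 256-bit key = 32 bytes
# 1024-bit key = 128 bytes
# 2048-bit key = 256 bytes
# 4096-bit key = 512 bytes
```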
 
Upvote
5 (6 / -1)

jdale

Ars Legatus Legionis
18,333
Subscriptor
It interests me that the first few responses seem to feel that encryption for LLMs might not be a good thing because it interferes with regulation. I disagree with that sentiment. I have long felt, and continue to feel, that a technical solution to moral transgressions is a fool's errand. It is much better fought culturally, by building consensus about what to do when these are encountered and by lowering barriers to reporting, than by trying to put yet another invasive task on the shoulders of already drowning regulators/enforcers.
It's not "lowering barriers to reporting" when you put the data where it is not subject to investigation.

That's deciding that an LLM can only cause harm to its user, and can't be used for, e.g., copyright infringement, generating spam, running fraud, creating malware, etc. And it's also deciding that there should be no evidence when the harm to the user is fatal (e.g., encouraging suicide).
 
Upvote
6 (9 / -3)

Taboobat

Smack-Fu Master, in training
60
Subscriptor
It interests me that the first few responses seem to feel that encryption for LLMs might not be a good thing because it interferes with regulation. I disagree with that sentiment. I have long felt, and continue to feel, that a technical solution to moral transgressions is a fool's errand. It is much better fought culturally, by building consensus about what to do when these are encountered and by lowering barriers to reporting, than by trying to put yet another invasive task on the shoulders of already drowning regulators/enforcers.
I'm definitely mixed on this. On the one hand I think things like Signal are critical. People need to be able to have fully private conversations, and people are the ones who benefit from secure messaging.

On the other hand, Confer seems to be a huge legal boon to the LLM provider as well as to the user. The user gets privacy, which is great, but the provider gets shielded from the many issues, and possibly crimes, that it has. "Oh, the model is infringing on copyrights? It's producing CSAM? It's literally killing vulnerable people? Well, we had no way to know, we can't see our own outputs!" I mean, Musk's response to Grok creating CSAM was to make it DM you instead of posting it publicly; their position is that if it isn't public, it's not a crime.

It's possible that LLMs will eventually land somewhere relatively safe and stable, but right now they need scrutiny. ChatGPT has killed at least half a dozen people (the real number is likely much higher; those are only the cases where lawsuits have been filed), and when that happens, if the chat logs are unrecoverable, then providers won't face consequences and won't have pressure to make their models safer.

Lastly, regarding this quote:
“It’s been really interesting and encouraging and amazing to hear stories from people who have used Confer and had life-changing conversations, in part because they haven’t felt free to include information in those conversations with sources like ChatGPT or they had insights using data that they weren’t really free to share with ChatGPT before but can using an environment like Confer.”
This is, frankly, kind of terrifying. It's going to be hard to convince me that people having "life-changing conversations" with LLMs by sharing their most private details is a good thing. Many people are already using LLMs as therapists, and having a sycophantic, hallucinatory, corporate "therapist" that's trained on 99% internet garbage is a bad, bad idea. Which, I suppose, is why people keep killing themselves or experiencing AI psychosis. I'm sure that some people are getting some benefit from it, but if a human therapist induced just a single patient to kill themselves one time, they'd be in jail forever, not getting billions of dollars of investment money.
 
Upvote
27 (31 / -4)

Jeff S

Ars Legatus Legionis
11,018
Subscriptor++
I wonder if Marlinspike will make the same sort of critical, fundamental design error with Confer that he made with Signal.

What I mean by that is, while I use Signal (mostly because it's what other people use) I have always LOATHED that Signal is tied to phone numbers. Yes, you can get a username that you can share with people so you don't have to share your phone number (this was not the case for the first few years of Signal, btw), but even so, underneath, Signal is still fundamentally tied to a phone number.

Which means that the phone company/government can kill your Signal account at any time by invalidating your phone service/number.

I don't think an Internet communication service should be tied to a legacy phone number, not in a way where it's dependent upon it to work.

Also, a phone number can be stolen, and while yes, your Signal contacts will get a message that the safety number has changed, I suspect most people don't really have a good plan in place for what to do when they see that message, and will likely just accept that the person on the other end is who they think it is, based on the contact name their phone shows for that number.
 
Last edited:
Upvote
31 (31 / 0)

dangoodin

Ars Tribunus Militum
1,646
Ars Staff
LLMs have many externalities, chief among them environmental/ecological ones (training is resource-intensive), moral/ethical ones (stealing from artists for training), and privacy ones. This project addresses only the last and leaves the others untouched. Privacy should be the default, not a nice-to-have optional feature, so I'm not wowed by the E2E here. So I'm not sold on the project of an Nth LLM assistant that doesn't have much of a differentiating factor.
The E2EE IS the differentiating factor. It protects the inference process. What other LLM does that?
 
Upvote
3 (4 / -1)

Fred Duck

Ars Tribunus Angusticlavius
7,234
Dan Goodin said:
Moxie Marlinspike—the pseudonym of an engineer who set a new standard for private messaging with the creation of the Signal Messenger—is now aiming to revolutionize AI chatbots in a similar way.
Oh, that's a relief. I was afraid people were naming their offspring after fizzy drinks.

I like Marlinspike. He's got moxie.

Did you know? ONE HUNDRED YEARS ago, Moxie was more popular than Coca-Cola. Now, Coca-Cola owns Moxie.
 
Upvote
0 (2 / -2)

drnick1

Wise, Aged Ars Veteran
259
As soon as you test it / try to push it out of bounds, you repeatedly get a "sorry, can't help with that," which is good.
No, it's not good. LLMs should not be arbitrarily censored. These days, it's already hard to ask GPT "how to download torrents?" without being served half a page of moralizing BS before getting to the technical part.
 
Upvote
-17 (7 / -24)