Does Anthropic believe its AI is conscious, or is that just what it wants Claude to think?

Atterus

Ars Tribunus Militum
2,337
Chatbots cannot become self-aware. Coding assistants probably can't either.
However, if you let an LLM prompt itself and give it enough agency to act outside the narrow context of a tool, then who knows.
We do know: "Habsburg AI" is the result. LLMs cannot operate outside of their "context" either (that is, their training set and architecture). This is not magic; it is hard, cold mathematics that has been well characterized since the '60s. Training on itself simply bakes in errors, and attempting to push beyond its limits creates the insanity we all saw with GPT, because without tags, LLMs fall apart. Ergo, they are inherently limited, and iterative training will propagate the error built into the method itself. Thus the "big reveal" when GPT went supervised, and the experts mocked it ruthlessly, since LLMs pretty much always need to be supervised models. Their inherent error doesn't allow for self-training.

All of this has been known for a long time, and it's the reason I lost my shit on the article about "oh wow! Self-training doesn't work! Who would have known?" The entire field of real AI scientists? The ones that keep getting the shaft in this crazy nonsense fad? Billions flying around and the outcast shills are the ones making bank? These morons celebrating 20% error rates versus published rates below 0.1% from the early 2010s? Bullshit. Throwing up big error bars doesn't equate to genius; it's lazy.
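The error-compounding claim above can be illustrated with a toy simulation. A Gaussian stands in for "a model," and each generation is refit only to samples drawn from the previous generation's fit, so estimation error accumulates instead of averaging out. All numbers here are invented for illustration; this is a sketch of the compounding mechanism, not a claim about any real system.

```python
import random
import statistics

# Toy sketch of iterative self-training: each "generation" is fit only
# to samples drawn from the previous generation's fit, so estimation
# error compounds rather than averaging out. The Gaussian "model" and
# all parameters are invented purely for illustration.
def train_on_own_output(generations=30, n_samples=20, seed=0):
    rng = random.Random(seed)
    mean, std = 0.0, 1.0               # generation 0: the "true" model
    spread = [std]
    for _ in range(generations):
        samples = [rng.gauss(mean, std) for _ in range(n_samples)]
        mean = statistics.fmean(samples)   # refit to our own output
        std = statistics.stdev(samples)    # sampling error carries over
        spread.append(std)
    return spread

history = train_on_own_output()
```

Because each fit sees only a finite sample of the previous fit's output, the estimated spread drifts as a random walk over generations instead of staying pinned to the truth.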

Besides, I think it is super important for you to understand/realize: Chatbots == LLMs. Plus all the evidence that "vibe coders" have been fking up code across the world.

If you want an AI that can operate beyond the bounds of its training set, these bleach-toothed con men, LLM cultists, and interns are not going to be remotely close to what is needed for that. You are looking at something like "GAIA" from HZD, with a huge array of independent models checking and balancing one another on a scale that makes these pathetic models look... more pathetic. Even then, it will never surpass its own corpus. Not until it is taught how, with human help. Pretending it can is a dangerous mistake.
 
Upvote
12 (16 / -4)
At what point do parents no longer have responsibility for the actions of their children?

If a parent gives or by way of negligence permits a child to have a gun, and the child then uses that gun in a way which results in death(s), there are typically legal consequences for the parent.

Given that these models lack the capacity for long-term growth and development into anything resembling “adulthood”—much less the ability to become self-sufficient—it seems to me that their agency by definition cannot exceed the legal standards one might apply to the agency of a baby.

Just because that baby appears linguistically mature or exhibits intelligence does not mean it has the wisdom or experience to contextualize its actions or their potential consequences. Nor can it experience suffering in a way that we can scientifically agree upon as genuine.

We treat corporations as persons for a number of legal reasons, and however useful that framing may be, adding “parent/child” considerations would go a long way towards curbing the excesses of corporate behavior. Doing something similar for AI would naturally follow here.
I don't think it's wise to use the word "natural" when extending tortured analogies based on the kind of "reasoning" that led to corporations being treated as people.
 
Upvote
3 (3 / 0)
It's pretty straightforward. You ask a question, and it does some processing. When it's done, it stops using CPU until you ask another question.

Your brain continues doing stuff, even when you're asleep.
It's trivially easy to leave an LLM running forever. If a brain has something like an LLM inside it, then presumably it is just constantly running the LLM.
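A minimal sketch of what "leaving it running" would mean in practice: feed each output back in as the next prompt. The `stub_llm` function and the loop below are invented placeholders for illustration; a real setup would call an actual inference API instead of the stub.

```python
# Sketch of an LLM left running indefinitely: each reply becomes the
# next prompt. stub_llm is an invented stand-in; a real setup would
# call an actual model here.
def stub_llm(prompt: str) -> str:
    return f"(model's reply to: {prompt[:40]})"

def self_prompt_loop(seed_prompt: str, steps: int) -> list[str]:
    transcript = [seed_prompt]
    for _ in range(steps):
        # Feed the most recent output straight back in.
        transcript.append(stub_llm(transcript[-1]))
    return transcript

transcript = self_prompt_loop("Are you conscious?", steps=3)
```

The `steps` cap is only there so the sketch terminates; drop it and the loop runs until you kill the process.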
 
Upvote
-2 (5 / -7)

k h

Ars Centurion
389
Subscriptor
Is there anything that would convince you that an LLM is actually having a subjective experience?
First, please convince us that you are having a subjective experience. If you're a chatbot, I don't want to encourage you to waste our time in philosophical debates.
 
Last edited:
Upvote
4 (4 / 0)

McTurkey

Ars Tribunus Militum
2,251
Subscriptor
I don't think it's wise to use the word "natural" when extending tortured analogies based on the kind of "reasoning" that led to corporations being treated as people.
You can either acknowledge and accept the world as it is today and seek to improve that condition or you can work to tear down that system and replace it with something else. If you’re going to do the former, then you need to grant a premise to the entities whose behaviors you seek to restrain. If you’re going to do the latter, you need to start from the much wider premise of changing the entire core economic structure of American capitalism (which I favor, to be clear).

But simply railing against running with a premise in order to demonstrate the consequences of that premise (that an LLM has agency and should therefore be the responsibility of its creator) does nothing to change or challenge the status quo.
 
Upvote
0 (0 / 0)

the cave troll

Ars Scholae Palatinae
1,270
Subscriptor++
It's negligibly easy to leave an LLM running forever. If a brain has something like an LLM inside it, then presumably it is just constantly running the LLM.

Given my own brain's tendency to think random stupid nonsense, that's not as implausible as it seems...


First, please convince us that you are having a subjective experience. If you're a chatbot, I don't want to encourage you to waste our time in philosophical debates.

I mean, the logical conclusion of this is that we are just as uncertain that an LLM is having a subjective experience as any human being is, but I don't think that was your point.


Explain to me hour your mind works then. I'll wait.

I would, but it would take me way longer than an hour to explain how trolls cogitate...
 
Upvote
-1 (2 / -3)

Fred Duck

Ars Tribunus Angusticlavius
7,336
reads carefully

sits with it


Stillwater makes a bold claim, cotton: The Benj article is well-written scepticism from someone who hasn't felt the jolt.

What it gets right:
Benj shows scepticism.

What it misses:
Claude Opus 4.5 is merely software, perhaps specifically programmed to respond in specific ways to specific lines of questioning.

The deeper issue:
If you ask a drunk to decide if they're too inebriated to drive, you shouldn't trust the answer because you can't know if they are really capable of making that judgement or if they're too biased.

What I notice:
Minesweeper is also software.

The practical question:
Is it possible that Minesweeper possesses consciousness but is unable to display it?

What Claude doesn't see:
Asking if Claude is conscious == asking if Minesweeper is conscious because they're both pieces of software.

The honest answer:
Humans will anthropomorphise anything. Also, while Anthropic may have decided to always pretend Claude "is conscious" to prevent the Science Fiction scenario of a supercomputer/robot/AI calculating that all humans are unnecessary, that doesn't silence the dissenting voices found all over the internet and is as practically useful as your mum always telling you that you are handsome/pretty.

Further reading:
https://meincmagazine.com/science/202...e-after-he-claims-groups-chatbot-is-sentient/

https://meincmagazine.com/tech-policy...o-claimed-lamda-chatbot-is-a-sentient-person/

Fred Duck
 
Upvote
6 (13 / -7)

MilanKraft

Ars Tribunus Angusticlavius
6,921
Anthropic has such a weird cultlike quality to it. I'm sure some of it is marketing hype but so much of even their 'research' product and interviews all have this intense mysticism to it. All of the AI companies are using stuff like this to oversell their product but none of them feel quite as quasireligious as Anthropic does.
[Emphasis mine]

I am not unsympathetic to the general notion, but Pastor Altman would like a word.

Some of the shit this guy has peddled about GPT... you'd think religion is his actual goal. And he has in fact been quoted as saying exactly this type of thing in the past, though the context was broader. (Covered in the book Empire of AI - a worthwhile read to understand how this whole shitshow came about.)

Also it makes sense in a way that Anthropic would take this approach, being started by people who walked away from OpenAI, and no doubt were / are well aware of the effect some of these "Altmanisms" had on people.
 
Last edited:
Upvote
7 (7 / 0)

Snark218

Ars Legatus Legionis
36,922
Subscriptor
Guys, I think you are arguing the same stance without realizing it. Look at set theory. We can build up the integers and operations on them from nothing but the empty set. Subtraction ends up being seen as a pattern of discrete set manipulations. So what if cognition is "just" pattern manipulation, but your pattern is a nested mess the size of the number of atoms in a skyscraper?
Because cognition does not simply involve the parts of the brain that recognize patterns.
That being said, my current stance is that the current form of LLMs is not conscious. Consciousness requires continuity. Current LLMs are "human memory" at best.
I don't think so. I'm arguing that LLMs are not conscious or intelligent, and we do not need to entertain the idea that they might be, because they do not engage in distinguishing features of intelligence like metacognition, they have no continuity of experience, and so on. They're arguing that even though LLMs don't appear to be conscious/intelligent/sentient, we can't assume they're not to some degree or won't ever become intelligent, because we don't know exactly how sentience works and maybe it works just like LLMs do, who knows?
 
Upvote
16 (17 / -1)

Snark218

Ars Legatus Legionis
36,922
Subscriptor
Explain to me hour your mind works then. I'll wait.
I don't need to. I've explained what it does.
It's entirely possible that a system that reliably recreates language is functioning in almost exactly the same way you do.
No, it's not.
Your ego just won't let you admit it.
Ego, he accuses me. Ironic as Alanis fuckin' Morissette.

I beg of you, I plead with you on my fucking knees, read one god damn thing about philosophy or cognitive science if you're going to debate this topic. One thing. Because you guys persist in this dumb-ass "well how do you knowwwww it's not just like an LLM" non-logic as if neither of these fields of study exist and I swear to Christ it makes me want to crawl up the wall when you guys act like nobody has touched this question at any point in the last two thousand god damn years.
 
Upvote
22 (26 / -4)

clewis

Ars Tribunus Militum
1,828
Subscriptor++
So if it didn't stop using CPU but instead kept doing other stuff while waiting for your next question, akin to human sleep, then it would be conscious? Are you saying that's the deciding criterion here?

Alternatively, if sleep did cause the brain to completely pause overnight and just existed so that, say, the body could remove waste products, would humans not be conscious?
Not necessarily, but it's a very basic requirement. I would also say that it needs to be able to adjust its neural weights in real time, on an ongoing basis. At that point, I think my physical objections go away, and I have a much harder time giving you a firm "It is not conscious."

I think the human brain being active during sleep is irrelevant to consciousness while awake. It was supporting the point that the brain continues doing stuff in between reading your questions here in the forum.
 
Upvote
1 (1 / 0)

clewis

Ars Tribunus Militum
1,828
Subscriptor++
Mostly fidgeting. Maybe some light filing and cleaning.

I subscribe to the theory that the filing and cleaning is preventing our neurons from being overtrained into insanity. I also posit, based on that, that Americans' chronic sleep deprivation encourages conspiracy thinking. Didn't Musk use to boast about how little sleep he got each night?

But this is all belief for me. I think it explains things elegantly, but I have no data to back it up.
 
Upvote
0 (0 / 0)

Snark218

Ars Legatus Legionis
36,922
Subscriptor
You don't know that we're not doing all of that in our brains. We are limited by our experience. We probably do run a probability engine to figure out the next "token",
Nope. We do not do that.
we probably do have an inherent error (just look at how stupid people are),
We do, but not because we're confabulating like LLMs.
we also hallucinate and go insane if we're not restarted (sleep) now and then.
Hallucination is a term co-opted by tech dweebs to describe something AIs do that has nothing whatsoever in common with what brains are doing when they hallucinate.
You all want to say that these things are nothing like us, when it's just as likely that they are just doing what we're doing, but on a GPU instead of in meat.
No, that's not likely. That's not how brains work. You just don't know that and round your ignorance up to possibility.
Ask yourself what the next word you're typing is going to be and wonder if your brain isn't just solving some probability problem and your hands are just receiving tokens in the form of electrical impulses. Wow, that's how a computer does it. Or is it because you are special. And nothing could possibly replicate something so magical as human thought.
I'm not going to entertain a question I know is bullshit. My brain is not doing that. That's not what brains do, when we observe what they're doing when they process language. Construction of language isn't linear. Nerve impulses are not tokens. You just want very badly to believe LLMs are intelligent or could possibly become intelligent, because you know a little bit about it and think it's awesome and want it to be real. But like a lot of people who are clever and have a bit of domain expertise, you fall face-first into the Sheldon Cooper Fallacy: Everything is [thing you understand], and I understand [that thing], so I understand everything about this problem in [completely different field] and it's all very simple and what do you mean you disagree how dare you.
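For the record, the next-token step an LLM performs at generation time is a weighted draw from a probability distribution over a vocabulary, nothing more exotic. A toy sketch (the vocabulary and weights are invented for illustration):

```python
import random

# Toy next-token step: the "model" outputs a probability distribution
# over a vocabulary, and one token is sampled from it. Vocabulary and
# weights here are invented for illustration.
vocab = ["the", "cat", "sat", "on", "mat"]
weights = [0.10, 0.35, 0.25, 0.20, 0.10]

def sample_next_token(rng: random.Random) -> str:
    # A single weighted draw -- this is the entire "decision".
    return rng.choices(vocab, weights=weights, k=1)[0]

rng = random.Random(42)
next_token = sample_next_token(rng)
```

Whether brains do anything like this when producing speech is exactly the point under dispute; the sketch only shows what the LLM side of the comparison actually is.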
 
Last edited:
Upvote
20 (22 / -2)

cf0

Seniorius Lurkius
45
Subscriptor
I beg of you, I plead with you on my fucking knees, read one god damn thing about philosophy or cognitive science if you're going to debate this topic. One thing. Because you guys persist in this dumb-ass "well how do you knowwwww it's not just like an LLM" non-logic as if neither of these fields of study exist and I swear to Christ it makes me want to crawl up the wall when you guys act like nobody has touched this question at any point in the last two thousand god damn years.
Seriously. Just to amplify this beyond my upvote.

People know shit. Do some basic homework. It's not all mysterious woo-woo that we don't really understand anything about. Is there some stuff we don't know? Sure, lots. But we know way more than you seem to think.
 
Upvote
23 (23 / 0)
read one god damn thing about philosophy or cognitive science if you're going to debate this topic.
It would probably help if you shared something rich and informative that you believed made your case, and were open to talking about it.

If you have a full professional article, I would read it.

And in the future you could just pop it up rather than have these silly conversations.

Even if it's just "this is my favorite Chomsky book" it would improve things.
 
Last edited:
Upvote
-18 (0 / -18)

Oak

Ars Tribunus Militum
2,572
Subscriptor++
If that’s true, the anthropomorphic framing isn’t hype; it’s the technical art of building AI systems that generalize safely.
I'm not saying it's necessarily so, but by now that kind of sentence sounds so LLM-generated to my slop-sensitized ear that it disrupts my reading flow when I come across one. I don't mean just the "it's not just A; it's B" contrastive reframing (heck, this sentence does that too); I mean that plus the fact that the B has an almost poetical, impressed tone.

People wrote with such patterns occasionally (some people, often, even) before LLMs (there's a specific, different journalist I'd noticed with the habit of employing it frequently, well before LLMs), so I realize it could be coincidental — or maybe even an intentional wink at readers, given the subject matter. But these days, just about every time I hit a "It's not just [simple thing]; it's [advanced thing described with a sort of flowery phrasing]," it feels sort of off-putting to me (by at least association).

The article is well written, overall, and I kinda hate that such a wording pattern can hit me differently now, even when it's not LLM-sourced (though that type of phrasing always sounded overdone when used more than occasionally). But it's also annoying, as someone who uses semicolons and em dashes, that some people now mistake those for sure indicators, so I've been on the other side of this sort of thing as well.
 
Upvote
0 (0 / 0)

Hadrian's Waller

Ars Praetorian
817
Subscriptor
None of those puddings contain proof. The qualities of the pudding (flavor, texture) are the proof; the original expression is "The proof of the pudding is in the eating." From there we get the shortened "The proof is in the pudding", which IMO is a valid way to shorten the original saying, but which adds ambiguity not present in the original.
Gosh, I didn't know that! /s
 
Upvote
-6 (0 / -6)

clewis

Ars Tribunus Militum
1,828
Subscriptor++
None of those puddings contain proof. The qualities of the pudding (flavor, texture) are the proof; the original expression is "The proof of the pudding is in the eating." From there we get the shortened "The proof is in the pudding", which IMO is a valid way to shorten the original saying, but which adds ambiguity not present in the original.
I prefer my puddings with liquor in them.
 
Upvote
3 (3 / 0)

graylshaped

Ars Legatus Legionis
68,206
Subscriptor++
I subscribe to the theory that the filing and cleaning is preventing our neurons from being overtrained into insanity. I also posit, based on that, that Americans' chronic sleep deprivation encourages conspiracy thinking. Didn't Musk use to boast about how little sleep he got each night?

But this is all belief for me. I think it explains things elegantly, but I have no data to back it up.
Somehow we developed similar theories.

Mine also includes the notion that dreams are nothing more than us assigning a narrative to the various concepts being filed away.
 
Upvote
5 (5 / 0)
Will AI be as appealing to company executives if it does have a soul? Much of the commercial appeal of AI is that every exec can now have their own personal enslaved God in a box.
I think you’re overestimating how much they’d care if it suffered doing whatever they want done.
 
Upvote
6 (6 / 0)

justsomebytes

Wise, Aged Ars Veteran
199
Subscriptor
Ask yourself what the next word you're typing is going to be and wonder if your brain isn't just solving some probability problem and your hands are just receiving tokens in the form of electrical impulses. Wow, that's how a computer does it.
I'd like to point out this also isn't how a computer does it either, most computers don't have hands.
 
Upvote
13 (14 / -1)