We have no proof that AI models suffer, but Anthropic acts as if they might, at least where training is concerned.
> Chatbots cannot become self aware. Coding assistants probably too. However, if you let an LLM prompt itself and give it enough agency to act outside of the narrow context of a tool - then who knows.

We do know... "Hapsburg AI" is the result. LLMs cannot operate outside of their "context" either (that is, their training set and architecture). This is not magic; it is hard, cold mathematics that has been well characterized since the 60s. Training on itself simply bakes in errors, and attempting to push beyond its limits creates the insanity we all saw on GPT, because without tags, LLMs fall apart. Ergo, they are inherently limited, and iterative training will propagate the error built into the method itself. Thus the "big reveal" when GPT went supervised, and the experts mocked it ruthlessly, since LLMs pretty much always need to be supervised models. Their inherent error doesn't allow for self-training.
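For what it's worth, the "baking in errors" claim has a simple toy demonstration. Below is a minimal sketch of my own, with a Gaussian standing in for the model and made-up sample sizes; it is an illustration of the failure mode, not a statement about any specific system:

```python
# Toy sketch of "Hapsburg AI" / model collapse: a Gaussian stands in
# for the model. Each generation is trained only on samples drawn from
# the previous generation's fit, so each generation inherits the
# previous one's sampling error and the fit slowly degenerates.
import numpy as np

rng = np.random.default_rng(0)
real_data = rng.normal(loc=0.0, scale=1.0, size=50)  # generation 0 sees real data
mu, sigma = real_data.mean(), real_data.std()

for generation in range(1, 21):
    synthetic = rng.normal(mu, sigma, size=50)       # train on own output only
    mu, sigma = synthetic.mean(), synthetic.std()    # refit on synthetic data
    print(f"gen {generation:2d}: mu={mu:+.3f} sigma={sigma:.3f}")
# mu random-walks away from 0 and sigma drifts; iterate long enough and
# the fitted distribution degenerates instead of recovering the original.
```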
> At what point do parents no longer have responsibility for the actions of their children? If a parent gives or by way of negligence permits a child to have a gun, and the child then uses that gun in a way which results in death(s), there are typically legal consequences for the parent.
> Given that these models lack the capacity for long-term growth and development into anything resembling “adulthood”—much less the ability to become self-sufficient—it seems to me that their agency by definition cannot exceed the legal standards one might apply to the agency of a baby. Just because that baby appears linguistically mature or exhibits intelligence does not mean it has the wisdom or experience to contextualize its actions or their potential consequences. Nor can it experience suffering in a way that we can scientifically agree upon as genuine.
> We treat corporations as persons for a number of legal reasons, and however useful that framing may be, adding “parent/child” considerations would go a long way towards curbing the excesses of corporate behavior. Doing something similar for AI would naturally follow here.

I don't think it's wise to use the word "natural" when extending tortured analogies based on the kind of "reasoning" that led to corporations being treated as people.
> It's pretty straightforward. You ask a question, and it does some processing. When it's done, it stops using CPU until you ask another question.

It's negligibly easy to leave an LLM running forever. If a brain has something like an LLM inside it, then presumably it is just constantly running the LLM.
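To make the two modes being contrasted concrete, here is a minimal sketch; `generate` is a stand-in of my own, not any real API:

```python
# The two modes the thread is arguing about: a chatbot that idles
# between questions vs. a loop that keeps the model "running forever"
# by feeding its own output back in as the next prompt.
import time

def generate(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned continuation."""
    return f"[model continuation of: {prompt[:40]}...]"

def chatbot_mode():
    while True:
        prompt = input("you> ")   # blocks here: no compute between questions
        print(generate(prompt))

def always_on_mode(seed: str):
    thought = seed
    while True:                   # never blocks: the model prompts itself
        thought = generate(thought)
        time.sleep(1)             # pace the self-prompting loop

print(generate("hello"))          # single call; neither loop is started here
```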
Your brain continues doing stuff, even when you're asleep.
> Is there anything that would convince you that an LLM is actually having a subjective experience?

First, please convince us that you are having a subjective experience. If you're a chatbot, I don't want to encourage you to waste our time in philosophical debates.
> If you mean “mimic” in the sense that a talking bird mimics what it hears, then sure. If you mean mental processes like reasoning, lmao.

Are you really reasoning or are you just processing your context?
> I don't think it's wise to use the word "natural" when extending tortured analogies based on the kind of "reasoning" that led to corporations being treated as people.

You can either acknowledge and accept the world as it is today and seek to improve that condition, or you can work to tear down that system and replace it with something else. If you’re going to do the former, then you need to grant a premise to the entities whose behaviors you seek to restrain. If you’re going to do the latter, you need to start from the much wider premise of changing the entire core economic structure of American capitalism (which I favor, to be clear).
> Your brain continues doing stuff, even when you're asleep.

Mostly fidgeting. Maybe some light filing and cleaning.
Explain to me how your mind works then. I'll wait.
Anthropic has such a weird cult-like quality to it. I'm sure some of it is marketing hype, but so much of their 'research', product, and interviews has this intense mysticism to it. All of the AI companies are using stuff like this to oversell their product, but none of them feel quite as quasi-religious as Anthropic does.
> Guys, I think you are arguing the same stance without realizing it. Look at set theory. We can build up the integers and operations on them from nothing but the empty set. A subtraction ends up being seen as a pattern of discrete set manipulations. So what if cognition is "just" pattern manipulation, but your pattern is a nested mess the size of the atoms in a skyscraper?

Because cognition does not simply involve the parts of the brain that recognize patterns.
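The set-theory construction the quoted comment gestures at is easy to make concrete. A small sketch using von Neumann naturals, where 0 is the empty set and n+1 = n ∪ {n}; the function names are mine:

```python
# Von Neumann naturals: arithmetic built from nothing but sets.
# Addition falls out as a pattern of pure set manipulations.
def zero() -> frozenset:
    return frozenset()

def succ(n: frozenset) -> frozenset:
    return n | frozenset({n})   # n+1 is n together with {n}

def to_int(n: frozenset) -> int:
    return len(n)               # |n| recovers the ordinary integer

def add(m: frozenset, n: frozenset) -> frozenset:
    # Peano-style recursion: m + 0 = m, m + succ(k) = succ(m + k)
    result = m
    for _ in range(len(n)):
        result = succ(result)
    return result

two = succ(succ(zero()))
three = succ(two)
print(to_int(add(two, three)))  # -> 5
```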
> That being said, my current stance is that the current form of LLMs is not conscious. Consciousness requires continuity. Current LLMs are "human memory" at best.

I don't think so. I'm arguing that LLMs are not conscious or intelligent, and we do not need to entertain the idea that they might be, because they do not engage in distinguishing features of intelligence like metacognition, they have no continuity of experience, and so on. They're arguing that even though LLMs don't appear to be conscious/intelligent/sentient, we can't assume they're not to some degree, or won't ever become intelligent, because we don't know exactly how sentience works and maybe it works just like LLMs do, who knows?
> I haven't scrolled through the entire comment thread, but -- is someone taking Roko's Basilisk too seriously?

Yes, I'm probably cooked.
> Explain to me how your mind works then. I'll wait.

I don't need to. I've explained what it does.
> It's entirely possible that a system that reliably recreates language is functioning in almost exactly the same way you do.

No, it's not.
> Your ego just won't let you admit it.

Ego, he accuses me. Ironic as Alanis fuckin' Morissette.
> So if it didn't stop using CPU but instead kept doing other stuff while waiting for your next question, akin to human sleep, then it would be conscious? Are you saying that is the deciding criterion here? Alternatively, if sleep did cause the brain to completely pause overnight and just existed so that, say, the body could remove waste products, would humans not be conscious?

Not necessarily, but it's a very basic requirement. I would also say that it needs to be able to adjust its neural weights in real-time, on an ongoing basis. At that point, I think my physical objections go away, and I have a much harder time giving you a firm "It is not conscious."
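For concreteness about "adjusting its neural weights in real-time": a minimal online-learning sketch with a toy linear model. All names and numbers here are my own illustration of the idea, not anyone's product:

```python
# Online learning: a gradient step applied after every single
# interaction, rather than weights frozen at deployment time.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=3)             # the model's adjustable weights

def predict(x: np.ndarray) -> float:
    return float(w @ x)

def online_update(x: np.ndarray, target: float, lr: float = 0.01) -> None:
    global w
    error = predict(x) - target
    w -= lr * error * x            # one gradient step per experience

# every "experience" immediately reshapes the weights:
for _ in range(100):
    x = rng.normal(size=3)
    online_update(x, target=x.sum())
print(w)                           # drifts toward [1, 1, 1]
```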
> Some of us had already been worshipping basilisks back before it was cool...

I tried worshipping a basilisk once, but I wound up getting pretty stoned.
> You don't know that we're not doing all of that in our brains. We are limited by our experience. We probably do run a probability engine to figure out the next "token",

Nope. We do not do that.
> we probably do have an inherent error (just look at how stupid people are),

We do, but not because we're confabulating like LLMs.
> we also hallucinate and go insane if we're not restarted (sleep) now and then.

Hallucination is a term co-opted by tech dweebs to describe something AIs do that has nothing whatsoever in common with what brains are doing when they hallucinate.
> You all want to say that these things are nothing like us, when it's just as likely that they are just doing what we're doing, but on a GPU instead of in meat.

No, that's not likely. That's not how brains work. You just don't know that and round your ignorance up to possibility.
> Ask yourself what the next word you're typing is going to be and wonder if your brain isn't just solving some probability problem and your hands are just receiving tokens in the form of electrical impulses. Wow, that's how a computer does it. Or is it because you are special? And nothing could possibly replicate something so magical as human thought.

I'm not going to entertain a question I know is bullshit. My brain is not doing that. That's not what brains do when we observe what they're doing while they process language. Construction of language isn't linear. Nerve impulses are not tokens. You just want very badly to believe LLMs are intelligent or could possibly become intelligent, because you know a little bit about it and think it's awesome and want it to be real. But like a lot of people who are clever and have a bit of domain expertise, you fall face-first into the Sheldon Cooper Fallacy: everything is [thing you understand], and I understand [that thing], so I understand everything about this problem in [completely different field] and it's all very simple and what do you mean you disagree how dare you.
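For readers keeping score on the "probability engine" point, this is the mechanical step both sides are describing. A toy sketch with made-up scores, not any real model's numbers:

```python
# Next-token sampling: softmax over raw scores, then draw a token from
# the resulting distribution. This single step is the "solving some
# probability problem" being argued about above.
import numpy as np

rng = np.random.default_rng(42)
vocab = ["the", "cat", "sat", "on", "mat"]
logits = np.array([2.0, 0.5, 1.0, 0.1, 1.5])    # model's raw scores

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max())                      # numerically stabilized
    return e / e.sum()

probs = softmax(logits)
next_token = rng.choice(vocab, p=probs)          # sample the next "token"
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```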
> I beg of you, I plead with you on my fucking knees, read one god damn thing about philosophy or cognitive science if you're going to debate this topic. One thing. Because you guys persist in this dumb-ass "well how do you knowwwww it's not just like an LLM" non-logic as if neither of these fields of study exist, and I swear to Christ it makes me want to crawl up the wall when you guys act like nobody has touched this question at any point in the last two thousand god damn years.

Seriously. Just to amplify this beyond my upvote.
> read one god damn thing about philosophy or cognitive science if you're going to debate this topic.

It would probably help if you shared something rich and informative that you believed made your case, and were open to talking about it.
> If that’s true, the anthropomorphic framing isn’t hype; it’s the technical art of building AI systems that generalize safely.

I'm not saying it's necessarily so, but by now that kind of sentence sounds so LLM-generated to my slop-sensitized ear that it disrupts my reading flow. I don't mean just the use of "it's not just A; it's B" contrastive reframing (heck, this sentence does that too); I mean that plus the fact that the B has an almost poetical, impressed tone.
> None of those puddings contain proof. The qualities of the pudding (flavor, texture) are the proof; the original expression is "The proof of the pudding is in the eating." From there we get the shortened "The proof is in the pudding", which IMO is a valid way to shorten the original saying, but which adds ambiguity not present in the original.

Gosh, I didn't know that! /s
> The "memory" issue is the next thing that will be solved before these become truly AGI.

LLMs cannot and never will be AGI. Full stop.
> None of those puddings contain proof. The qualities of the pudding (flavor, texture) are the proof; the original expression is "The proof of the pudding is in the eating." From there we get the shortened "The proof is in the pudding", which IMO is a valid way to shorten the original saying, but which adds ambiguity not present in the original.

I prefer my puddings with liquor in them.
> I subscribe to the theory that the filing and cleaning is preventing our neurons from being overtrained into insanity. I also posit, based on that, that Americans' chronic sleep deprivation encourages conspiracy thinking. Didn't Musk use to boast about how little sleep he got each night? But this is all belief for me. I think it explains things elegantly, but I have no data to back it up.

Somehow we developed similar theories.
> Will AI be as appealing to company executives if it does have a soul? Much of the commercial appeal of AI is that every exec can now have their own personal enslaved God in a box.

I think you’re overestimating how much they’d care if it suffered doing whatever they want done.
> Ask yourself what the next word you're typing is going to be and wonder if your brain isn't just solving some probability problem and your hands are just receiving tokens in the form of electrical impulses. Wow, that's how a computer does it.

I'd like to point out that this isn't how a computer does it either; most computers don't have hands.