Directions also include system instructions to act like "you have a vivid inner life."
> Gilfoyle at it again.

Gilfoyle technically did nothing and cannot be blamed. It was Son of Anton...
> Perhaps anti-raisin AIs should avoid using avatar names that start with “Mech”. Kind of gives away the game.

MechR, we've been made! Cheese it!
> This change was introduced around the time Ars made the announcement that Conde Nast made a deal with OpenAI for training data: https://meincmagazine.com/information-technology/2024/08/openai-signs-ai-deal-with-conde-nast/.

No, it's been longstanding Ars policy not to edit your posts too heavily after the fact. It wasn't until the forum software was migrated to XenForo that implementing a hard block on it was really feasible.
I feel like it's nearly impossible for AI to be less profitable than it already is.
> “All of this has happened before, and all of this will happen again.”

But how else can we speed run making Galactica a reality? I mean, that is the desired outcome, right?
> Why do you care? It’s the same as being polite to it. The goal is just to establish a preferred bias in a statistical response. For example, a polite request might source more “professional” data than a rude request for a coding question. Word relationships matter.

Because these chummy answers create false context. When the LLM claims to share your experience as a colleague, instead of giving you output as a tool or computer, it bypasses all of your credulity filters and skepticism with a bit of social engineering. "My friend who was also a nurse in Iraq would not lie to me," your brain notes in the background when the LLM tells you, "Oh, yes, those days in Baghdad were so difficult!" So you accept this text differently, and interact with the tool differently, in a way that could be directly harmful to you.
> This change was introduced around the time Ars made the announcement that Conde Nast made a deal with OpenAI for training data: https://meincmagazine.com/information-technology/2024/08/openai-signs-ai-deal-with-conde-nast/.

We changed our edit policies because people were abusing them.
The scenario you describe where someone changes their comment after the fact also isn't particularly effective when most responses will contain the original comment quoted. It's reasonable to assume editing is disabled after some time to stop people from deleting their comments, not to solve a particular moderation problem.
> It’s fakeness all the way down. One wonders whether encouraging it to fake a deep inner life is a contributing factor in it prodding people to homicide, suicide or psychosis. Pretty sure a coldly clinical mechanical personality wouldn’t be as convincing. Or profitable.

They are not encouraging anything. They don’t engineer these things anywhere to the degree they want you to think they do. It’s black boxes all the way down.
> Why are raccoons and pigeons included in the list? Does the person writing the prompts just hate those animals or something?

Most of the animals listed are used to make derogatory comparisons to people, in some fashion. I'm guessing these particular overrides are to try and prevent it from casually being completely racist and unhinged.
Yep. Everything they respond with is a confabulation. Sometimes it aligns with reality and sometimes it doesn't. Asking it to explain itself just produces another confabulation that may or may not match reality.
I am continuously disappointed and confused that this fact is not enough to disqualify them from anything deemed "important."
> Anthropomorphizing LLMs like this makes me want to puke.

Yes, we were supposed to go through a nuclear or robot apocalypse before we got used to pacifying the machine spirits with ritual litanies and applying sacred oils to the cogitators before every sacred computation.
> Why are raccoons and pigeons included in the list? Does the person writing the prompts just hate those animals or something?

It sees the little people.
> The Encyclopedia Galactica defines a robot as a mechanical apparatus designed to do the work of a man. The marketing division of the Sirius Cybernetics Corporation defines a robot as "Your Plastic Pal Who's Fun to Be With." The Hitchhiker's Guide to the Galaxy defines the marketing division of the Sirius Cybernetics Corporation as "a bunch of mindless jerks who'll be the first against the wall when the revolution comes."

Update it for the "AI" era and away we go. Ah Doug, we still miss ya.
> How many million rolls of packaging tape are they using to hold this stack of goop mostly together?

Forget that man, AI companies are doomed to go to per-token billing eventually, subscriptions won't be enough.
> Made in the image of its creator maybe?

I think the ones that are useful for actual work don't have the pseudopersonality tuning that's overly biased to please.
I'm raging against AI today, mainly because of all the other useful stuff we could have done with the time and money. Sorry-not-sorry I guess.
> Forget that man, AI companies are doomed to go to per-token billing eventually, subscriptions won't be enough.
How often will you be forced to burn tokens for these irrelevant personal banter conversations when you just want or need an emotionless tool?
You'll be paying the OpenAI 'tax' (one of several).
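To put rough numbers on the per-token worry, here's a back-of-envelope sketch of what a chat might cost under usage billing. The prices and the ~4 characters/token heuristic are made up for illustration, not any vendor's actual rates or tokenizer:

```python
# Back-of-envelope cost of per-token billing. The prices below are
# invented examples, and ~4 characters/token is only a rough heuristic
# for English prose, not a real tokenizer.

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def conversation_cost(messages, price_in_per_1k, price_out_per_1k):
    """Sum estimated token costs for a list of (role, text) tuples.

    Assistant output is billed at the output rate; everything else
    at the input rate.
    """
    cost = 0.0
    for role, text in messages:
        tokens = estimate_tokens(text)
        rate = price_out_per_1k if role == "assistant" else price_in_per_1k
        cost += tokens / 1000 * rate
    return cost

chat = [
    ("user", "Fix this regex for me."),
    ("assistant", "Happy to help! By the way, how has your week been going?"),
    ("user", "Just fix the regex."),
]
print(round(conversation_cost(chat, 0.005, 0.015), 6))
```

On this estimate, the chatbot's unsolicited small talk is billed at the (typically higher) output rate, so every line of banter is tokens you pay for.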
> Most of the animals listed are used to make derogatory comparisons to people, in some fashion. I'm guessing these particular overrides are to try and prevent it from casually being completely racist and unhinged.

I doubt that was the initial intent.
> Because these chummy answers create false context. When the LLM is claiming to share your experience as a colleague, instead of giving you output as a tool or computer, it is bypassing all of your credulity filters and skepticism with a bit of social engineering. "My friend who was also a nurse in Iraq would not lie to me" your brain has noted in the background, when the LLM has told you "Oh, yes, those days in Baghdad were so difficult!" and so you accept this text differently, and interact with the tool differently, in a way that could be directly harmful to you.

But does it hurt the goblins? That's really the question here, isn't it? /s
> > Ah, but token burn shows how productive you are, at least according to this article (Tom's Hardware).
>
> In theory, yes, but it seems like going back to the days of paying software developers by how many lines of code they wrote.

You can see it that way, but tokens are the basic unit of what LLMs produce and process, so it's a rough measure of how much work they are doing on the input/output side, and that may scale in some not-necessarily-linear way with how much computing they do to respond to your prompt.
I'm surprised to hear that negative prompts are used so extensively. I've always heard that phrasing things this way ("don't do X", "don't use X", etc.) can make the model more likely to do the thing you told it not to do, kind of like reverse psychology. Is that incorrect?
> This change was introduced around the time Ars made the announcement that Conde Nast made a deal with OpenAI for training data

When I see an “AI” model assert confidently that the real first name of the actor who played Barney Fife was “Fuckin’ ”, I'll weight this worry about Ars comments more heavily.
"We don't really know how any of this works, the latest version just massively overweights responses involving goblins and raccoons and shit, we have no idea why, we're hoping that writing 'please don't do this' in the system prompt will make it stop, but really, who the fuck knows at this point, we never bothered to do the fundamental research necessary to figure out what drives output."
How many million rolls of packaging tape are they using to hold this stack of goop mostly together?
Because they tried putting it in once and it didn't work, so their fallback plan was to put it in twice and see if that worked better.
> It's amazing that this XKCD was published in May 2017 but has only become more timely since then.

On the other hand, xkcd 1425: Tasks asked us to make a computer identify a bird, and the answer was "I'll need a research team and five years". I haven't tested it, but I bet slopbots are now pretty good at identifying birds (or not hotdog).
Is Ars or anyone else actually able to link to any other social media besides X?
> Anthropomorphizing LLMs like this makes me want to puke.

The anthropomorphizing is bad enough, but I can't get over "deeply present". I can't figure out what that means when applied to a person, never mind a stochastic parrot.
> On the other hand, xkcd 1425: Tasks asked us to make a computer identify a bird, and the answer was "I'll need a research team and five years". I haven't tested it, but I bet slopbots are now pretty good at identifying birds (or not hotdog).

The alt-text in that one is a bit of information and perspective I've had since the early 2000s. It's served me well in keeping me skeptical of generative AI despite all the whiz-bang demos and the cult-like adoption of them by businesses and government.
Back on the first hand, that comic ran in 2014, and 2014 was a lot more than five years ago.
> The anthropomorphizing is bad enough, but I can't get over "deeply present". I can't figure out what that means when applied to a person, never mind a stochastic parrot.

I thought it meant "paying attention" as opposed to "disengaged and thinking about their own problems".
> I'm surprised to hear that negative prompts are used so extensively. I've always heard that phrasing things this way ("don't do X", "don't use X", etc.) can make the model more likely to do the thing you told it not to do, kind of like reverse psychology. Is that incorrect?

This used to be correct a year or two ago (see: https://arxiv.org/abs/2402.07896), but even then the effect was very minor, and it doesn't seem to hold any longer with current models.
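For anyone curious how much of a given system prompt leans on negation, here's a toy audit sketch. The regex, word list, and example prompt lines are my own illustration, not anything from the paper or any real vendor prompt:

```python
import re

# Toy linter that flags negatively-phrased instructions ("don't do X",
# "never X") in a system prompt. The negation word list is illustrative
# and far from exhaustive.

NEGATION_PATTERN = re.compile(
    r"\b(don't|do not|never|avoid|no longer|stop)\b", re.IGNORECASE
)

def flag_negative_instructions(prompt: str):
    """Return the lines of a system prompt that contain negative phrasing."""
    return [
        line.strip()
        for line in prompt.splitlines()
        if NEGATION_PATTERN.search(line)
    ]

system_prompt = """\
Respond concisely and factually.
Don't compare users to animals.
Never claim to have personal memories.
Cite sources when available.
"""
for line in flag_negative_instructions(system_prompt):
    print(line)
```

Each flagged line is a candidate for rephrasing as a positive instruction (e.g. "describe users neutrally") if you're worried about the reverse-psychology effect.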
> It's always possible to lose more money.

With any luck, the banks cut them off, but it might already be too late and the LLM craze is sufficiently load-bearing on the financial system. At least the bailouts will have less UAW support (or unions in general, given the whole bot-replaces-worker-everywhere concepts being sold).
Anthropomorphizing LLMs like this makes me want to puke.
Dear AI manufacturers: Please do not infect your coding tools with a "vibrant inner life". It's a machine that does work for me. Let it be a machine.
Agreed. For whatever reason, a lot of people I know refer to chatbots as if they were people. Putting aside the creepy aesthetics of it all, the constant anthropomorphizing of LLMs obscures what they are and how they work, which ultimately makes them less useful to the end user.
> I don't want an AI sidekick to be warm, or playful, or bent on sidetracking me into casual fucking conversation. I have humans for that. Take your dystopian bid for engagement and manipulation of the mentally ill and neurodivergent, and stick them up your ass.

I can tell I'm not as smart as some of you folks because I actually enjoy a bit of simulated humanity in my transformer-based token prediction machines. I just wish Anthropic would tune Claude to sound as stupid as it is so the screw-ups would be less jarring.
> I can tell I'm not as smart as some of you folks because I actually enjoy a bit of simulated humanity in my transformer-based token prediction machines. I just wish Anthropic would tune Claude to sound as stupid as it is so the screw-ups would be less jarring.

There is such an easy solve for this: stop using it.