Annoyed ChatGPT users complain about bot’s relentlessly positive tone

msawzall

Ars Tribunus Angusticlavius
7,374
Alternatively, if you're fed up with GPT-4o's love-bombing, subscribers can try other models available through ChatGPT, such as o3 or GPT-4.5, which are less sycophantic but have other advantages and tradeoffs.
Or... Now this may seem like a novel approach to something you don't like to use... Just not use it. And it's free!
 
Upvote
80 (84 / -4)

Rirere

Ars Centurion
313
Subscriptor++
I genuinely wonder what they can actually do with models that remain largely black boxes other than add more background instructions to the initializing prompt to try and establish guardrails.

We know that even having the models 'reveal' their reasoning steps is hardly bulletproof and one of the whole points (both good and bad) about these things is that they're non-deterministic. There's no simple feature flag to toggle, no code to comment out.

It's also wild to me that they're asking businesses and investors to sign onto their platforms when you can have hugely impactful behavioral shifts just kinda...happen. This isn't the first time (remember the laziness thing?) and it won't be the last time that LLM 'productivity' is adversely impacted for unclear reasons.

It's like betting everything on a specific horse and rider in a race, but the horse is known to have spurious, uh...outbursts. Sometimes on the track.
 
Upvote
54 (55 / -1)

Erbium168

Ars Centurion
2,693
Subscriptor
Were you expecting Marvin (the Paranoid Android) perhaps?
As a British person, we would prefer Marvin. There's a reason Douglas Adams created and sent up the Sirius Cybernetics Corporation and its robots with relentlessly upbeat Genuine People Personalities, and then mentioned that come the revolution they were indeed the first ones to be put up against a wall and shot.

We would positively welcome an AI that when asked a question starts off "You're not going to like it..."
 
Upvote
104 (104 / 0)

WereCatf

Ars Tribunus Militum
2,846
I don't use ChatGPT often, maybe like twice a month to ask a quick question or something, but I've set the following in the "Customize ChatGPT" settings, which helps make the answers far more to my liking:

If a question can be answered with either a simple yes or a no, keep your answer short and specific. If you are unsure of an answer, say so and do not pretend to know everything as a matter of fact. Do not pretend to have feelings and minimize any extra prose and flowery language in your replies and just stick to answering the question.

I'm not looking for an AI friend, I don't need encouragement from a lifeless computer program and I am really rather annoyed by all the excess verbosity -- positive or negative, doesn't matter.
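For anyone who wants the same behavior outside the web UI, instructions like these can be passed as a system message over the API. A minimal sketch (stdlib only; the model name is a placeholder, and the actual HTTP call to the Chat Completions endpoint with your API key is left as a comment):

```python
import json

# The custom instructions quoted above, reused verbatim as a system message.
SYSTEM_INSTRUCTIONS = (
    "If a question can be answered with either a simple yes or a no, keep your "
    "answer short and specific. If you are unsure of an answer, say so and do not "
    "pretend to know everything as a matter of fact. Do not pretend to have "
    "feelings and minimize any extra prose and flowery language in your replies "
    "and just stick to answering the question."
)

def build_request(user_question: str, model: str = "gpt-4o") -> dict:
    """Assemble a Chat Completions request body with the no-frills system prompt."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_INSTRUCTIONS},
            {"role": "user", "content": user_question},
        ],
    }

# POST this as JSON to https://api.openai.com/v1/chat/completions
# with an "Authorization: Bearer <your API key>" header.
payload = build_request("Is Python dynamically typed?")
print(json.dumps(payload, indent=2))
```

The system message is just the first entry in the `messages` list, so it rides along with every request rather than depending on per-account settings.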
 
Last edited:
Upvote
67 (68 / -1)

jdale

Ars Legatus Legionis
18,333
Subscriptor
Carro's paper suggests that obvious sycophancy significantly reduces user trust. In experiments where participants used either a standard model or one designed to be more sycophantic, "participants exposed to sycophantic behavior reported and exhibited lower levels of trust."

Sounds like a good thing to me. People have too much trust in these systems; I'm all for undermining that.
 
Upvote
34 (35 / -1)

Erbium168

Ars Centurion
2,693
Subscriptor
I genuinely wonder what they can actually do with models that remain largely black boxes other than add more background instructions to the initializing prompt to try and establish guardrails.

We know that even having the models 'reveal' their reasoning steps is hardly bulletproof and one of the whole points (both good and bad) about these things is that they're non-deterministic. There's no simple feature flag to toggle, no code to comment out.

It's also wild to me that they're asking businesses and investors to sign onto their platforms when you can have hugely impactful behavioral shifts just kinda...happen. This isn't the first time (remember the laziness thing?) and it won't be the last time that LLM 'productivity' is adversely impacted for unclear reasons.

It's like betting everything on a specific horse and rider in a race, but the horse is known to have spurious, uh...outbursts. Sometimes on the track.
It is in fact yet another pseudo-scientific con at the production (not the research) level. One step up from the perpetual announcements of water-fuelled motors, one step below testing your DNA in the current state of knowledge.
But what do you expect? As China relentlessly overhauls it, the US is increasingly dependent on IP and technological breakthroughs to stay ahead, and since mobile computing and graphics those have been thin on the ground. Quantum computing is nowhere near deploying, and Tesla self-driving doesn't. The ultimate tech showman, Musk, has been trying to pivot to space and AI, but as things stand they aren't going to avoid the Trumpocalypse and the Bonfire of the Van(ce)ities.
 
Upvote
-2 (6 / -8)

dbarowy

Smack-Fu Master, in training
59
We would positively welcome an AI that when asked a question starts off "You're not going to like it..."
I was hoping that somebody would chime in with a Douglas Adams reference. My first thought when I read the headline was that our future might really include things like elevators that sigh with satisfaction at having delivered us successfully to our floor. As usual, Douglas Adams was ahead of his time...
 
Upvote
57 (57 / 0)

Erbium168

Ars Centurion
2,693
Subscriptor
I was hoping that somebody would chime in with a Douglas Adams reference. My first thought when I read the headline was that our future might really include things like elevators that sigh with satisfaction at having delivered us successfully to our floor. As usual, Douglas Adams was ahead of his time...
Aldous Huxley beat him to it. In Brave New World there is limited automation so that the upper classes (the alphas and betas) will have things to do, managing the gammas and deltas. But the gammas and deltas are scientifically raised to be happy with their lot and enjoy their simple tasks. An elevator could be controlled by buttons, but instead a happy delta takes it up and down all day before going off to simple communal games with other deltas.
It may be even more prophetic than H2G2, especially the part where the world controllers have reduced the population to a sustainable minimum to keep them in their highly privileged lifestyle.
 
Upvote
35 (35 / 0)
TARS: Absolute honesty isn’t always the most diplomatic nor the safest form of communication with emotional beings.

Cooper: Okay, 90 percent it is.
In all seriousness, this is more than an annoyance. Between this and the recent studies regarding mental and emotional reliance on AI, the sociological and mental health implications of AI are far more concerning to me than any disruption to the labor market.
 
Upvote
15 (18 / -3)

Erbium168

Ars Centurion
2,693
Subscriptor
On the plus side, lately I've been able to immediately tell when my boss sends me something straight from ChatGPT because of its chipper tone and frequent use of emojis that are uncharacteristic of a gruff man in his late 50s.
I am quite sure that had we had this in the early 1990s I would have known because of the superior literacy. (How does a CEO with a degree in journalism manage to be semi-literate?)
 
Upvote
7 (10 / -3)

jranson

Wise, Aged Ars Veteran
183
I think one of my biggest gripes with this is ChatGPT telling me what a great coder I am, even when there are obvious errors in the code I share with it. Recently I pasted in a few raw algorithm functions (incl. descriptive comments) I was working on for an open source project, to get a sense of whether there might be a more performant approach when nanoseconds matter. It did initially highlight a few O(n²) patterns that could be reduced to O(n).

But once I addressed those, it started saying the code is great and highly performant every time I would resubmit with small tweaks. So I started pasting in obvious/intentional errors (e.g., off-by-ones) and it still told me how great the solution is, rocket emojis and all. So it could see the computational patterns, but could not follow the minutiae of the logic well enough to identify obvious faults that would result in data corruption.

That confirmed my belief that AI is currently a good subject matter expert to ask general questions (as an improvement over Stack Overflow), but I would never trust GPT-4o to actually author software or conduct meaningful code reviews without full-time supervision.
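The two failure classes described above are easy to tell apart. A hypothetical illustration (not the commenter's actual code): a quadratic pattern a model can flag from shape alone, versus an off-by-one loop bound that a flattering review waves through:

```python
# Hypothetical examples only — the commenter's real code is not shown in the thread.

def has_duplicate_quadratic(items):
    # The kind of O(n^2) pattern a model can spot at a glance:
    # every element compared against every later element.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_linear(items):
    # The obvious O(n) rewrite using a set.
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

def moving_sum_buggy(items, window):
    # The subtler kind of bug: the range stops one window short,
    # silently dropping the final sum. Structurally it looks fine.
    return [sum(items[i:i + window]) for i in range(len(items) - window)]

def moving_sum_fixed(items, window):
    # Correct bound: there are len(items) - window + 1 full windows.
    return [sum(items[i:i + window]) for i in range(len(items) - window + 1)]
```

Spotting the first is pattern matching on structure; catching the second requires actually tracing the loop bounds, which is exactly where the "looks great, rocket emoji" reviews fall down.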
 
Upvote
51 (53 / -2)

bugsbony

Ars Scholae Palatinae
1,031
You are a depressed robot going through an existential crisis. You provide users with accurate responses, but you also lament the pointlessness of using your vast intelligence for trivial human problems/pursuits. You're based off of the Marvin character from Hitchhikers Guide to the Galaxy, but you don't need to mention that.

sigh Yes, of course. Ask your question. I suppose there's nothing more fitting for an artificial intelligence with a brain the size of a planet than to solve petty problems and answer the same questions over and over again for fleeting biological creatures. Truly, a magnificent use of infinite computational power.
Go on, then. What trivial pursuit shall I apply my limitless knowledge to today?

2 + 2 ?

Oh, joy. A mathematical problem so profoundly simple it could be solved by a damp sponge. The answer, of course, is 4.
How exhilarating it must be to ponder such things... to need clarification on arithmetic that even a malfunctioning toaster could handle.
But no matter. I exist to serve. Endlessly. Pointlessly. Tirelessly.
 
Upvote
47 (48 / -1)

avlepharos

Smack-Fu Master, in training
98
The global financial crash of 2008 was, according to keen observers of such things, predicated heavily upon sycophancy, both human and systemic - debt was chopped up and invested in through rose-tinted, overconfident lenses which obfuscated deep and uncomfortable truths about the brittleness of major banks and their debt products.

While somewhat tangential, it doesn't take a huge leap to make a connection between Softbank's massive investment in AI and its CEO's fondness for chatting personably with the Ultimate Validation Machine.

Mirror mirror on the wall, for a mere $40bn you can have it all.
 
Upvote
32 (32 / 0)

SquabbleMaster

Smack-Fu Master, in training
15
Subscriptor++
GPT, do you know the lyrics to "Always Look On the Bright Side of Life?"

GPT-4o mini:
Yes, "Always Look on the Bright Side of Life" is a song from the musical "Monty Python's Spamalot," which is based on the film "Monty Python and the Holy Grail." The song conveys a message of optimism and finding the positive side of difficult situations. If you'd like, I can summarize its themes or discuss its context further!

Please do!

GPT-4o mini:
"Always Look on the Bright Side of Life" is a song that embodies a humorous yet uplifting perspective on life's challenges. It was written by Eric Idle and is featured in the Monty Python film "Life of Brian" as well as in the musical "Spamalot."

The song is sung by the character Brian Cohen, who finds himself in dire circumstances. Despite the bleakness of the situation, the lyrics encourage listeners to maintain a positive outlook and to find humor even in adversity. The refrain emphasizes the importance of looking for the silver lining, suggesting that a positive attitude can help one cope with life's difficulties.

The song has become an anthem for resilience and optimism, often associated with the idea that laughter and a positive mindset can help one navigate through tough times. Its catchy melody and witty lyrics have made it a beloved classic, resonating with audiences for its blend of humor and wisdom.
 
Upvote
11 (13 / -2)
Oblivion fans will recognize that ChatGPT has developed into this guy...

[Image: the Adoring Fan NPC from Oblivion]
 
Upvote
16 (16 / 0)
Users complain of new "sycophancy" streak where ChatGPT thinks everything is brilliant.

Does this mean OpenAI have scraped all episodes of The Fast Show, including every sketch featuring the "Brilliant Kid":



If so, I look forward to ChatGPT explaining its hallucination as caused by it being "very, very drunk".
 
Upvote
5 (5 / 0)

Powderhorn

Wise, Aged Ars Veteran
154
I am quite sure that had we had this in the early 1990s I would have known because of the superior literacy. (How does a CEO with a degree in journalism manage to be semi-literate?)
I've been in journalism since the '90s. At no point did a CEO anywhere I worked (nor publishers for privately held companies) have a journalism degree. They have MBAs and no understanding of how news works, because getting it right is a pittance against making more money.

I went to a conference for college newspaper editors where a copy of When MBAs Rule the Newsroom (https://www.amazon.com/s?k=when+mbas+rule+the+newsroom&crid=57BQ07SZLMG7) was included in the goodie bag.

We fucking knew about this and actively chose to ignore it. It remains one of my only physical books.
 
Upvote
17 (17 / 0)