Users complain of new "sycophancy" streak where ChatGPT thinks everything is brilliant.
"Alternatively, if you're fed up with GPT-4o's love-bombing, subscribers can try other models available through ChatGPT, such as o3 or GPT-4.5, which are less sycophantic but have other advantages and tradeoffs."

Or... now this may seem like a novel approach to something you don't like to use: just not use it. And it's free!
"Were you expecting Marvin (the Paranoid Android), perhaps?"

As a British person, we would prefer Marvin. There's a reason Douglas Adams created and sent up the Sirius Cybernetics Corporation and its robots with relentlessly upbeat Real People Personalities, and then mentioned that come the revolution they were indeed the first ones to be put up against a wall and shot.
If a question can be answered with either a simple yes or a no, keep your answer short and specific. If you are unsure of an answer, say so, and do not pretend to know everything as a matter of fact. Do not pretend to have feelings, minimize any extra prose and flowery language in your replies, and just stick to answering the question.
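Rules like these can also be wired in as a system prompt when calling a chat model through an API. A minimal sketch, assuming the OpenAI Python SDK and a chat-completions-style endpoint (the model name is illustrative; the live call is commented out so the sketch stands alone):

```python
# Encode the anti-sycophancy rules above as a system prompt.
SYSTEM_PROMPT = (
    "If a question can be answered with a simple yes or no, keep your answer "
    "short and specific. If you are unsure of an answer, say so; do not "
    "pretend to know everything. Do not pretend to have feelings. Minimize "
    "extra prose and flowery language; just answer the question."
)

def build_messages(user_question: str) -> list[dict]:
    """Prepend the anti-sycophancy system prompt to a user question."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

# Illustrative live call (requires the openai package and an OPENAI_API_KEY):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o",
#     messages=build_messages("Is 7 prime?"),
# )
```

The same message structure works with ChatGPT's own "custom instructions" setting, which plays a similar role to the system message here.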
Carro's paper suggests that obvious sycophancy significantly reduces user trust. In experiments where participants used either a standard model or one designed to be more sycophantic, "participants exposed to sycophantic behavior reported and exhibited lower levels of trust."
It is in fact yet another pseudo-scientific con at the production (not the research) level. One step up from the perpetual announcements of water-fuelled motors, one step below testing your DNA in the current state of knowledge.

I genuinely wonder what they can actually do with models that remain largely black boxes, other than add more background instructions to the initializing prompt to try and establish guardrails.
We know that even having the models 'reveal' their reasoning steps is hardly bulletproof and one of the whole points (both good and bad) about these things is that they're non-deterministic. There's no simple feature flag to toggle, no code to comment out.
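The non-determinism isn't a bug that can be commented out; it's baked into how tokens are sampled. Each next token is drawn from a temperature-scaled distribution over the model's scores, so identical prompts can diverge run to run. A toy sketch of the mechanism (the token scores are made up for illustration):

```python
import math
import random

def sample_token(logits: dict[str, float], temperature: float,
                 rng: random.Random) -> str:
    """Pick the next token. temperature=0 means greedy (deterministic argmax);
    higher temperatures flatten the distribution and increase randomness."""
    if temperature == 0:
        return max(logits, key=logits.get)
    # Softmax over temperature-scaled logits (max-subtracted for stability).
    scaled = {t: v / temperature for t, v in logits.items()}
    m = max(scaled.values())
    weights = {t: math.exp(v - m) for t, v in scaled.items()}
    r = rng.random() * sum(weights.values())
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # fallback for floating-point rounding

# Made-up next-token scores for the prompt "That idea is ..."
logits = {"brilliant": 2.0, "reasonable": 1.5, "flawed": 1.0}

greedy = sample_token(logits, 0, random.Random(0))  # always "brilliant"
# Fifty sampled runs at temperature 1.0 produce more than one distinct token.
sampled = {sample_token(logits, 1.0, random.Random(seed)) for seed in range(50)}
```

With temperature above zero, "brilliant" is merely the most likely continuation, not the only one, which is why behavioral shifts can't simply be toggled off.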
It's also wild to me that they're asking businesses and investors to sign onto their platforms when you can have hugely impactful behavioral shifts just kinda...happen. This isn't the first time (remember the laziness thing?) and it won't be the last time that LLM 'productivity' is adversely impacted for unclear reasons.
It's like betting everything on a specific horse and rider in a race, but the horse is known to have spurious, uh...outbursts. Sometimes on the track.
I was hoping that somebody would chime in with a Douglas Adams reference. My first thought when I read the headline was that our future might really include things like elevators that sigh with satisfaction at having delivered us successfully to our floor. As usual, Douglas Adams was ahead of his time...

We would positively welcome an AI that, when asked a question, starts off with "You're not going to like it..."
Aldous Huxley beat him to it. In Brave New World there is limited automation so that the upper classes (the alphas and betas) will have things to do, managing the gammas and deltas. But the gammas and deltas are scientifically raised to be happy with their lot and enjoy their simple tasks. An elevator could be controlled by buttons, but instead a happy delta takes it up and down all day before going off to simple communal games with other deltas.
In all seriousness, this is more than an annoyance. Between this and the recent studies regarding mental and emotional reliance on AI, the sociological and mental health implications of AI are far more concerning to me than any disruption to the labor market.

TARS: Absolute honesty isn't always the most diplomatic nor the safest form of communication with emotional beings.
Cooper: Okay, 90 percent it is.
"On the plus side, lately I've been able to immediately tell when my boss sends me something straight from ChatGPT because of its chipper tone and frequent use of emojis that are uncharacteristic of a gruff man in his late 50s."

I am quite sure that had we had this in the early 1990s, I would have known because of the superior literacy. (How does a CEO with a degree in journalism manage to be semi-literate?)
I've been in journalism since the '90s. At no point did a CEO anywhere I worked (nor a publisher at the privately held companies) have a journalism degree. They have MBAs and no understanding of how news works, because getting it right is a pittance against making more money.