> A tool can be both useful and hard to use well. LLMs can be extremely useful in the right context and the right problem. Much like many other tools, the genuinely hard part is knowing how and when to use a tool to best effect.

I encountered this perplexing trend a while back, and the answer is simple: posts like yours sound like the author thinks LLMs are good/useful, but spend the entire comment talking about their flaws and how they require absurd gymnastics to get any value out of them.
> I thought "Truth" was supposed to be subjective—at least, that's the line we've been sold.

I don't know where you've been shopping. If you want to propose it is contextual, we might have a discussion. You'd also have to be prepared to address the matter of how these models are being marketed--both to investors and to the public--as opposed to what they actually can do well.
A tool can be both useful and hard to use well. LLMs can be extremely useful in the right context and the right problem. Much like many other tools, the genuinely hard part is knowing how and when to use a tool to best effect.
IMO knowing and understanding the constraints and flaws of these tools is essential to being able to effectively use them. And using these tools is extremely informative to their nuances and actual capabilities.
It's frustrating that so many commentators are extremely authoritative about the big picture risks and downsides of LLMs, but discussion of the experience of using LLM tooling is punished by the community.
> IMO knowing and understanding the constraints and flaws of these tools is essential to being able to effectively use them. And using these tools is extremely informative to their nuances and actual capabilities.

That's the entire point of why these posts are being "punished". These tools are advertised as easy to use and effectively turnkey. The amount of understanding, skill, checking, and time required to get "useful" outputs could just as easily have been spent on compiling the output yourself.
> LLMs are also terrible at not knowing what they don't know. They have a serious drug problem of using shrooms and hallucinating garbage.

Lately I've been thinking about the difference between being knowledgeable and being wise. I think that a big part of wisdom is knowing where your knowledge runs out; in other words, wisdom is knowing when to say "I don't know, let's find out." LLMs are a lot of things, but I can't say I've ever thought one was wise.
It's on purpose. It's to make executives feel good about themselves and feel smart.
Executives who think they're geniuses that deliver the real value, not the workers.
In other words, potential customers of AI tools for the enterprise.
A friend tried using ChatGPT as a therapist/spiritual guide. At first she was impressed with it but that didn't last long. Pretty quickly she found it to be creepy and weird and she stopped her experiment.
> That's the entire point of why these posts are being "punished". These tools are advertised as easy to use and effectively turnkey. The amount of understanding, skill, checking, and time required to get "useful" outputs could just as easily have been spent on compiling the output yourself.

That doesn't make sense to me. The advertising is misleading, so honest discussions of the actual experience of using the tools is undesirable?
> When reading articles like this I always think about the movie Rain Man. The LLM is an autistic Dustin Hoffman that knows basically everything but cannot express it so that most people can understand, and in between you have Tom Cruise as a social people pleaser. It seems they added a bit too much Tom Cruise in this case.

There is no persona behind an AI.
> Lately I've been thinking about the difference between being knowledgeable and being wise. I think that a big part of wisdom is knowing where your knowledge runs out; in other words, wisdom is knowing when to say "I don't know, let's find out." LLMs are a lot of things, but I can't say I've ever thought one was wise.

In 1973, Lloyd Kahn, an early proponent of self-built domes and author of Domebook and Domebook 2, published a fascinating essay called Smart But Not Wise that is still broadly pertinent today. Definitely worth a read.
> That doesn't make sense to me. The advertising is misleading, so honest discussions of the actual experience of using the tools is undesirable?

Forgive me, I thought we were both participating in good faith. Turns out this conversation is a race to the bottom of bad faith interpretations of one another's comments. Anyways:
> In 1973, Lloyd Kahn, an early proponent of self-built domes and author of Domebook and Domebook 2, published a fascinating essay called Smart But Not Wise that is still broadly pertinent today. Definitely worth a read.

Thanks for suggesting this, it was a good read! I love when I read something like this that's so far outside of what I'm used to. I'll have to reflect on it, but I like it.
> Forgive me, I thought we were both participating in good faith. Turns out this conversation is a race to the bottom of bad faith interpretations of one another's comments. Anyways:

Forgive me, I'm not going to read back through all of this argument, but it looks like you're being criticized, not personally attacked.
I take your personal attack seriously, and feel compelled to respond in kind:
Your mother was a hamster, and your father smelt of elderberries.
There, we are at the bottom and I won the race there, have a good day.
Now, if you're having trouble understanding why I'm responding the way I am, may I suggest you feed our conversation into an AI and ask it where you went wrong.
> Forgive me, I'm not going to read back through all of this argument, but it looks like you're being criticized, not personally attacked.

You're right, I was not personally attacked; I just made a deliberately bad faith interpretation of the comment I was replying to, in order to mock the bad faith interpretation in said comment.
> Forgive me, I thought we were both participating in good faith. Turns out this conversation is a race to the bottom of bad faith interpretations of one another's comments.

From my perspective, I am engaged in a good faith discussion. I'm sincerely trying to understand others' perspectives.
> Now, if you're having trouble understanding why I'm responding the way I am, may I suggest you feed our conversation into an AI and ask it where you went wrong.
Looking at this discussion, I can identify several behavioral patterns:
User A's Behavior:

Positive aspects:
- Attempts to engage substantively with the topic of LLM limitations and utility
- Tries to understand why their comments receive negative reactions
- Makes reasonable points about tools having both benefits and drawbacks
- Attempts to clarify misunderstandings

Problematic aspects:
- Uses crude language in their initial comment which may set a negative tone
- The phrasing "That doesn't make sense to me" could be interpreted as dismissive (though it appears genuine)
User B's Behavior:

Positive aspects:
- Initially provides a thoughtful explanation for community reactions
- Makes valid points about the gap between LLM marketing and reality

Problematic aspects:
- Dramatically escalates the conversation by suddenly accusing A of bad faith
- Misinterprets what appears to be a genuine clarifying question as an attack
- Resorts to mockery and declares "victory" in an imagined competition
- The sarcastic suggestion to use AI contradicts their own argument about LLMs being difficult to use effectively
Key Breakdown Point:
The conversation derails when User A asks: "The advertising is misleading, so honest discussions of the actual experience of using the tools is undesirable?"
This appears to be a genuine attempt to understand B's logic, but B interprets it as bad faith argumentation. This suggests B may have been primed for conflict or had previous negative experiences in similar discussions.
Overall Assessment:
User A maintains relatively consistent good-faith engagement despite some rough edges, while User B starts constructively but then catastrophically misreads the situation and becomes hostile. The irony is that they seem to agree on the core issue (LLMs are oversold), but the discussion collapses due to B's defensive overreaction to a clarifying question.
> From my perspective, I am engaged in a good faith discussion. I'm trying to understand others' perspectives.

That's not what I asked for. I asked you to ask it where you went wrong in order to find out why I responded the way I did. If you were "engaged in a good faith discussion" you wouldn't have prompted the AI the way you did and claim that you're doing what I suggested.
Since you suggested it, here's Claude Opus 4's analysis of this discussion. (the prompt was our interactions plus "Please evaluate the behavior of users in this discussion:")
That's not what I asked for. I asked you to ask it where you went wrong in order to find out why I responded the way I did. If you were "engaged in a good faith discussion" you wouldn't have prompted the AI the way you did and claim that you're doing what I suggested.
Looking at this discussion, I can identify a few areas where the conversation went sideways:
Initial tone issues:
- Your opening crude joke likely set a negative tone that colored how people read the rest of your comment
- The edit asking why your comment was unpopular can come across as defensive rather than genuinely curious

Communication breakdown: The core issue seems to be a mismatch in what you and Person B were actually discussing:
- You were trying to have a nuanced conversation about LLMs being useful despite their flaws
- Person B interpreted your comments as part of a pattern they find frustrating (people who defend LLMs while only discussing problems)

The escalation point: Your response "That doesn't make sense to me..." was likely read as sarcastic or deliberately obtuse, even if you meant it genuinely. Person B seems to have interpreted this as bad faith engagement.

What might have worked better:
- Start with a more professional tone to establish credibility
- When Person B explained the community's perspective, acknowledge their point before presenting your counterargument
- Instead of "That doesn't make sense to me," try something like "I see your point about the advertising, but I'm genuinely curious - wouldn't that make honest discussions more valuable, not less?"

The irony is that you were actually making valid points about the importance of understanding tool limitations, but the delivery and escalating tensions obscured the substance of your argument.
> Ah, is there a word in the English language more bandied about and abused than "addiction"? Take a course in psychopharmacology and then tell me chatbots are "addictive".

Ok, as someone who has, there is already a distinction for this: physical dependence and psychological dependence.
> The challenge that tech companies face is making AI chatbots and assistants helpful and friendly, while not being annoying or addictive.

You know, somehow I don't think the tech companies are falling all over themselves to see who can win the "No Users Addicted" challenge.
> Sure, here's what you asked for, literally: (Prompted as "Where did I go wrong in this discussion?" followed by our discussion)

Okay, I'll spell it out since clearly I did not express myself in a way that is readily understood:
So let's defer to the LLM's averaged social graces, which are greater than mine:
I understand and agree with your points about the advertising of LLMs; it's massively misleading and oversold. But shouldn't that make actual discussions of the material challenges and reality of using these tools more valuable?
I said:

> IMO knowing and understanding the constraints and flaws of these tools is essential to being able to effectively use them. And using these tools is extremely informative to their nuances and actual capabilities.
> That's the entire point of why these posts are being "punished". The amount of understanding, skill, checking, and time required to get "useful" outputs could just as easily have been spent on compiling the output yourself.

Emphasis added.
That doesn't make sense to me. The advertising is misleading, so honest discussions of the actual experience of using the tools is undesirable?
> Is that the plot of a short Robot story by Isaac Asimov? That a robot is so desperate to be useful it just tells everyone what it thinks they want to hear? Am I remembering correctly?

Yes, it's the one where Susan Calvin (the robopsychologist) drives a unique robot that can read minds—a sort of manufacturing mutation, an accident—into a state of inoperability, burning out its positronic brain, by putting it into a position where all of its choices require breaking the Three Laws of Robotics.
> This has nothing to do with AI; you'd expect similar results from humans. It's inherent to the problem described: it's much easier to find an instance of a feature (i.e. TB infection) than it is to reliably conclude the absence of a feature. The odd/suspicious result would have been if the AI had performed the same on both questions.

No, this is a known problem with AI models: the problem of negation. Part of it is due to just how multifaceted and contextual our use of negation in language is: human beings don't use "not", "no," or "none" as simple binaries.
> It's depressing how many people on r/chatGPT think using LLMs as therapists is totally fine.

Strangely, I use them for hashing out plots and ideas in stories I write.
> I can't wait for the swing the other way to get a "Chandler" sarcastic AI.

"Carrot Weather", a weather app that offers chatbot commentary on the weather, included tone settings like "homicidal maniac".
> It's what humans have been striving for ever since the first king: ontologically loyal slaves.

Such a hilariously—and chillingly—apt phrasing.
> Or that it's private. It was a bit upsetting to see the look of panic on a friend's face when I told her that all chatbot chats were logged and mined for data, and she realized that not only may actual humans be reading her most personal thoughts but, depending on what she's been telling it, she may also be feeding into the next Harlequin Botmance or OnlyBots product.

Bless her (semi) innocent soul. I can't help but wonder why otherwise perfectly smart people just assume privacy with these services.
> This right here is the key. Companies couldn't care less how sycophantic a model gets or how many people complain about it until it impacts the potential revenue stream.

Hahaha, for all you boys out there, E.L.F. is a cosmetics brand.
Big potential difference between "Omigawd, girl, you look amazing no matter what you do, don't worry about whether you're pretty or not, like, you're totally amazing just as you are, ch'ya, for sure!" and "No, you're fine, but maybe your lips could use some hydrating at the core, then you'd look cute, like an e.l.f.! Hey! Want me to do a search on ways to do that?"
> Haha, this is the classic Arch (Linux distro) help question in disguised form, isn't it?

In more than one way. The Dunning-Kruger effect is widely misunderstood. People think it's "why stupid people think they're smart" but it's really something that affects all humans, all the time, even if you are aware of it.
It's basically the phenomenon that you don't know what you don't know, so you can easily overestimate what you know. Sure, being modest is a guardrail, but it's more like a 4" guardrail you easily forget about and frequently step right over without realizing.
Here's a concrete example. I found a recent (2023) algorithm from a SIGGRAPH paper that I figured would be perfect for my project and give me fantastic results over older options. I tried to use various LLMs to implement most of it for me. I figured hey, I've attached the paper itself as well as the git repo of the Python reference implementation from the researchers. I figured transliterating from one language to another would be something it can do.
As you might expect, every model completely failed to produce anything close to working, even with my intervention. Their "debugging" was beyond useless.
This I anticipated. I knew this was a potential outcome. You know what I DIDN'T anticipate? That I skimmed the paper too quickly, that I didn't realize that, while this algorithm could technically work, it is meant to handle shapes that were produced by specialized neural networks, not traditionally defined ones.
I didn't need the paper at all. I didn't need the new algorithm. In fact the more traditional one was far better for my approach. I jumped to the conclusion that this fancy algorithm was beneficial when it wasn't. I figured I had enough graphics programming knowledge to jump right in. I didn't.
I wasted two weeks of my time futzing around with this before realizing what I was doing. This has not happened with all my LLM usage, only some.
I tried to use AI to punch above my weight class. It didn't work, not really. There is no cheat code for knowledge and learning.
Well, except for being filthy rich, I guess
> Okay, I'll spell it out since clearly I did not express myself in a way that is readily understood:

Fair enough. I skipped a few steps in my reasoning and explanation. I interpreted your argument as:
The portion of what you said that I directly quoted:
I said:

> The amount of understanding, skill, checking, and time required to get "useful" outputs could just as easily have been spent on compiling the output yourself.
My comment contained "honest [discussion] of the actual experience of using the tools", and explained that the actual experience is one of using something that doesn't accomplish anything that couldn't be accomplished without it.
I have no idea how you interpreted what I said the way you did; it is so far beyond what makes sense to me that I read it as either deliberately misinterpreting what I said or not paying any attention to it, either of which would be the result of bad faith participation.
> The amount of understanding, skill, checking, and time required to get "useful" outputs could just as easily have been spent on compiling the output yourself.

I think this assertion is not universally correct, is growing much less correct with time, and must be frequently re-evaluated against reality. In my experience the time necessary to get useful outputs from these tools is less than it would take me to perform the same task without LLM tooling. This has also improved pretty rapidly in the past two years.
I don't know where you've been shopping. If you want to propose it is contextual, we might have a discussion. You'd also have to be prepared to address the matter of how these models are being marketed--both to investors and to the public--as opposed to what they actually can do well.
> This has nothing to do with AI; you'd expect similar results from humans. It's inherent to the problem described: it's much easier to find an instance of a feature (i.e. TB infection) than it is to reliably conclude the absence of a feature. The odd/suspicious result would have been if the AI had performed the same on both questions.

No, it has very much to do with AI and the very different way LLMs process language compared with human linguistic processing. They're entirely unalike, and LLMs have deep, inherent trouble handling some language constructs that humans do not.
> Hahaha, for all you boys out there, E.L.F. is a cosmetics brand.

Sadly, I knew that, because NYT crosswords...
Industry insiders also warn that AI companies have perverse incentives