I think I'm even more "old man yells at cloud" than I used to be, but a lot of this AI-generated text seems kind of pointless. We already have human-written and edited sources for a lot of things. Why do I need a script to spit out a mangled Wikipedia article for me?
Like the other day I asked a friend how a game plays, and he pasted me a barely readable summary from ChatGPT. I just went to DriveThruRPG and Wikipedia and read about it there.
Generating fiction might be interesting, if soulless and problematic in other ways.
"OpenAI’s Terms of Use make clear that ChatGPT is a tool that assists the user in the writing or creation of draft content and that the user owns the content they generate with ChatGPT,"
Why pay the guy whose job it is to ask the computer to spit out the content? Seems easy to automate...
Think bigger. Why pay writers when you can ask a computer to spit out a whole new book, or show, or movie, or play? It's pointless for you, but not for business.
> LLMs aren't sources of truth. Whatever they say is a fabrication by definition of how they work. Why is that so hard for people to understand? They aren't an alternative to a Google search that sends you to a page. They create text/images/videos/audio based on mathematical models, not based on the truth.

Although almost everyone on Ars should agree with this, the general public doesn't understand that.
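To make the "mathematical models, not truth" point concrete, here's a deliberately tiny sketch of the generation loop (all the probabilities are invented, and a real model is vastly bigger): pick the next token by likelihood. Notice that no step in it looks anything up or checks a fact.

```python
# Toy next-token generator. The "learned" statistics below are
# invented for illustration; a real LLM uses a neural network over
# tens of thousands of tokens, but the loop is the same idea.
import random

next_word_probs = {
    "the":    {"cat": 0.5, "moon": 0.3, "author": 0.2},
    "cat":    {"sat": 0.6, "wrote": 0.4},
    "author": {"wrote": 0.7, "sat": 0.3},
    "wrote":  {"the": 1.0},
    "sat":    {"on": 1.0},
    "on":     {"the": 1.0},
    "moon":   {"sat": 1.0},
}

def generate(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        dist = next_word_probs.get(words[-1])
        if dist is None:
            break
        choices, weights = zip(*dist.items())
        # The only criterion is statistical plausibility; nothing here
        # consults a source or verifies a claim.
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the author wrote the cat sat on the"
```

Whatever comes out is fluent-looking by construction and true only by accident, which is the whole point.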
It's far from a hypothetical problem. ChatGPT once claimed I co-authored a book with John Steinbeck. While that's not terrible (in fact, my reaction was "I wish!"), it made me wonder "what if it accused me of ghostwriting Mein Kampf for Hitler?"
> Although almost everyone on Ars should agree with this, the general public doesn't understand that.

Then there's a simple solution. Add the same disclaimer as fictional television shows and movies: "The output of this system is meant for entertainment purposes only. Any similarity to real individuals, places, or events is coincidental."
That's where the real problem lies: not informed, tech-literate people using it, but the general public, which doesn't understand how most technology actually works. If we want to fix the real problem, we should invest in education.
> LLMs aren't sources of truth. Whatever they say is a fabrication by definition of how they work.

Unfortunately, given the extreme and deliberate degradation of Google search, they are often superior for getting answers. I hate that Google search has become so utterly useless that I often have to turn to an LLM (I refuse to call them "AI", because they're not) - but what other choice do I have? Search is genuinely incapable of providing useful information in 2023.
Sounds to me like Battle's complaints are showing up in regular search results as the blurb about him at the top.
> When users sign up to use a chatbot, disclaimers warn them to fact check chatbot responses

We've moved past training AI to identify bicycles and pedestrians for autonomous driving under the guise of security, to asking us to curate AI's content base under the guise of "suggested content".
> I hate that Google search has become so utterly useless that I often have to turn to an LLM (I refuse to call them "AI", because they're not)

Eliza isn't AI. Markov chains aren't AI. Master systems aren't AI. OCR isn't AI. NN speech identification isn't AI. NN image identification isn't AI. LLMs aren't AI. One of these days, we're going to figure out what intelligence actually is through process of elimination. (Edit: in case it wasn't clear, the preceding was an attempt at mockery, and not the belief of this poster)
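Since Markov chains made that list: here's roughly how little machinery the oldest rung needs. A throwaway sketch (the corpus is invented) where "training" is just counting which word follows which, and generation is sampling from those counts - the same predict-the-next-token idea that LLMs scale up with neural networks:

```python
# Toy Markov chain text generator. Training = counting word pairs;
# generation = sampling from those counts. The corpus is made up.
import random
from collections import defaultdict

corpus = ("the model predicts the next word and the next word "
          "just follows the last word").split()

table = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev].append(nxt)  # duplicate entries encode the frequencies

def babble(start: str, length: int = 10) -> str:
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(babble("the"))  # fluent-ish nonsense, e.g. "the next word just follows the last word..."
```

Whether that deserves the label "AI" is exactly the goalpost argument above.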
> I think I'm even more "old man yells at cloud" than I used to be, but a lot of this AI-generated text seems kind of pointless.

Soon, almost all people like you (and me) will be filtered out because "not efficient enough". Too slow, not enough output, overthinking "content"...
> Eliza isn't AI. Markov chains aren't AI. ... One of these days, we're going to figure out what intelligence actually is through process of elimination.

Artificial Intelligence is the entire field, an umbrella term for everything from Deep Learning to Regressions. What you are thinking of is "Machine Learning", which is effectively the use of complex mathematics to emulate a decision-making process, but incapable of learning on its own. That's what nearly everything is right now, with the few exceptions being "lesser" models.
There’s a reason we coined the term “general intelligence” a few decades ago.
> An "AI" assistant on a home server that is your responsibility sounds great to me. Far better than the things from Google, Apple, and Amazon. Let me train it on the sources that I choose

Like 4chan? I can sympathize. I mean, if I was looking to wipe out Trump supporters at scale, I would go with a language model tuned on custom misinformation.
> Artificial Intelligence is the entire field, an umbrella term for everything from Deep Learning to Regressions. What you are thinking of is "Machine Learning" ...

No. I'm mocking the need of some to continually redefine "intelligence" as always something beyond what we've currently developed. They won't be satisfied until it's on par with ourselves.
> Don't need AI to make up things. People felt the same about others reading, the printing press, free speech, the internet, and thousands of other things.

Sure, but accelerating the proliferation of false information is definitely a bad thing that LLMs are doing right now. At least with traditional search you can rely on reputable sources like Wikipedia (major caveat here, since I remember an article about Wikipedia citations that don't actually support the claims made in articles).
An "AI" assistant on a home server that is your responsibility sounds great to me. Far better than the things from Google, Apple, and Amazon. Let me train it on the sources that I choose and an easy enough way to manage data stored locally.
If I have it change all the zombies in World War Z to Teletubbies, then that is for personal use; I can't legally profit from it, just like everything else already. Or, like CNN would (again, probably) be made fun of for using doctored pictures/video. Without verification it doesn't matter where it came from, and even then you still want multiple sources for extraordinary claims.