New "computational Turing test" reportedly catches AI pretending to be human with 80% accuracy.
We asked a computer if another computer sounded authentic, and the answer was a resounding yes!

Instead of relying on subjective human judgment about whether text sounds authentic, the framework uses automated classifiers and linguistic analysis to identify specific features that distinguish machine-generated from human-authored content.
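The article doesn't publish the framework's actual classifiers, but the general idea — scoring text on shallow linguistic features rather than asking a human whether it "sounds real" — can be sketched in a few lines. Everything below (the feature set, the thresholds, the `looks_machine_generated` helper) is invented for illustration, not the study's method:

```python
import re

def linguistic_features(text):
    """Compute a few shallow stylistic features (illustrative only)."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),
        "exclamation_rate": text.count("!") / max(len(sentences), 1),
    }

def looks_machine_generated(text):
    """Toy heuristic: long, lexically even sentences with little
    exclamatory punctuation read as 'polished' (machine-like).
    The thresholds here are made up for the sketch."""
    f = linguistic_features(text)
    return f["avg_sentence_len"] > 20 and f["exclamation_rate"] < 0.1
```

A real system would feed features like these (plus many more) into a trained classifier rather than hand-set thresholds, but the appeal is the same: the judgment is automated and reproducible.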
Time for Microsoft to dust off that old Tay code.

When prompted to generate replies to real social media posts from actual users, the AI models struggled to match the level of casual negativity and spontaneous emotional expression common in human social media posts, with toxicity scores consistently lower than authentic human replies across all three platforms.
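The toxicity comparison reported above boils down to a simple measurement: score every reply, then compare the group means. As a rough sketch — assuming some toxicity scorer is available; the stub below just counts hits against a tiny hypothetical lexicon, standing in for whatever classifier the researchers actually used — the comparison looks like this:

```python
from statistics import mean

# Toy stand-in for a real toxicity scorer; here the score is just the
# fraction of words found in a tiny invented "toxic" lexicon.
TOXIC_LEXICON = {"idiot", "stupid", "garbage", "trash", "dumb"}

def toxicity(text):
    words = text.lower().split()
    return sum(w.strip(".,!?") in TOXIC_LEXICON for w in words) / max(len(words), 1)

def mean_toxicity_gap(human_replies, ai_replies):
    """Positive gap = human replies are more toxic on average,
    the direction the study reportedly found on all three platforms."""
    return mean(map(toxicity, human_replies)) - mean(map(toxicity, ai_replies))
```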
What about spouting absolute bullshit with perfect confidence, never using one word when an inaccuracy followed by a dozen rationalizations will do? The great similarity between AI-driven chat and a human being is the almost complete inability of either to tell the truth. The most significant characteristic of chat AI is its perfect resemblance to a dirty politician.
Providing actual examples of a user’s past posts or retrieving relevant context consistently made AI text harder to distinguish from human writing, while sophisticated approaches like giving the AI a description of the user’s personality and fine-tuning the model produced negligible or adverse effects on realism.
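The article doesn't reproduce the researchers' prompts, but the winning strategy it describes — feed the model concrete past posts rather than an abstract personality description — is essentially few-shot prompting. A hypothetical sketch of assembling such a prompt (the function name and format are assumptions, not the paper's):

```python
def build_reply_prompt(target_post, past_posts, persona=None):
    """Assemble a prompt asking an LLM to reply in a user's voice.
    Per the reported findings, concrete past posts (few-shot style)
    helped realism, while a persona description added little or hurt."""
    parts = []
    if persona:  # reportedly negligible or adverse effect on realism
        parts.append(f"You are: {persona}")
    parts.append("Here are posts this user has written:")
    parts += [f"- {p}" for p in past_posts]
    parts.append(f"Write a reply, in the same voice, to: {target_post}")
    return "\n".join(parts)
```

The design point is that examples pin down register (slang, punctuation, length) far more concretely than a description like "sarcastic 30-something gamer" ever could.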
Yeah, what's it to you?

So... the one special talent for which humans retain the upper hand is... being jerks?
While researchers keep trying to make AI models sound more human, actual humans on social media keep proving that authenticity often means being messy, contradictory, and occasionally unpleasant. This doesn’t mean that an AI model can’t potentially simulate that output, only that it’s much more difficult than researchers expected.
The study also revealed an unexpected finding: instruction-tuned models, which undergo additional training to follow user instructions and behave helpfully, actually perform worse at mimicking humans than their base counterparts.
I will ensure I do the needful, many thanks.

The next time you encounter an unusually polite reply on social media, you might want to check twice. It could be an AI model trying (and failing) to blend in with the crowd.
Wait until they start launching campaigns to claim that Gore won in Florida, we never landed on or went to the Moon, JFK was shot by Russian agents, the Earth is flat, the Vegas Mandalay Bay shooting was a false flag, storm systems Katrina and Harvey were caused by HAARP, and whatever else triggers rage responses.

Give it time, they'll make chatbots start sounding like jerks online. Those social media sites need it, else they'll lose their precious engagement.
I'm wondering what bots they ran into. Karma bots?

They didn't test Grok.

Likely Ollama. Or whatever all the troll bots on Reddit are using.