
Reddit will require “fishy” accounts to verify they are run by a human

AI-generated content is still acceptable for now.

Scharon Harding
Credit: Getty

Reddit will require accounts that exhibit “automated or otherwise fishy behavior” to verify that a human runs them, Reddit CEO Steve Huffman said in a Reddit post today. The verification process aims to combat unwanted bots from flooding Reddit at a time when AI bots are poised to take over the Internet.

“As AI becomes a bigger part of the Internet, we want to make sure that when you’re on Reddit, you know when you’re talking to a person and when you’re not,” Huffman said.

Human verification will only occur if Reddit suspects that an account is a bot. This is “rare” and won’t apply to “most users,” Huffman emphasized. If the account cannot prove that it’s human, it “may be restricted,” he said.

Reddit will check if an account is run by a human by using third-party tools that Huffman said won’t expose users’ true identity, Reddit username, or Reddit activity. Current methods that Reddit is exploring include passkeys, which Huffman said are a great starting point but don’t provide any “proof of individuality or anything other than ‘a human probably did something.’”

Reddit is also looking into third-party biometric services, like World ID, which uses iris-scanning tech.

“I think the Internet needs verification solutions like this, where your account information, usage data, and identity never mix,” Huffman said.

A last resort may be third-party government ID services, which Reddit is already required to use in some geographies, like the UK. Huffman said this is “the least secure, least private, and least preferred” method for human verification on Reddit.

“When we are forced to do this, we design the integrations so that we never actually see your ID information, so your Reddit data cannot be tied to you,” he added.

Additionally, Huffman announced that accounts that use bots in permitted ways will get an App label. Reddit has posted information about how developers can get their apps labeled.

An example of what the App label will look like when viewing Reddit on desktop. Credit: Reddit

The announcement comes amid concern from some industry commentators that AI bot traffic online could soon surpass human traffic. Web agents are becoming more prevalent and flocking to social media sites. A relaunched Digg, for example, shut down its open beta after three months due to an "unprecedented bot problem" led by "sophisticated AI agents and automated accounts," CEO Justin Mezzell said in March.

Ensuring that Reddit isn’t overtaken by bots is in the company’s financial interest. Reddit positions itself to users as a place to have conversations with real people about shared topics and interests. The platform has also been increasingly selling itself to advertisers as a way to push products to real people. And Reddit has made millions by allowing AI companies to train large language models on its years’ worth of human-generated content; it has sued and blocked companies that it believes have wrongfully scraped content without paying.

Reddit already removes an average of 100,000 accounts per day that use nefarious bots and post spam, per Huffman, who said that the removals often happen before users see the accounts. Reddit also plans to make it easier for Reddit users to report accounts that they think are bots.

AI-generated content still allowed

Reddit is exploring ways to limit bots on the platform but is refraining from going after humans who employ chatbots to create posts and comments. Reddit hasn’t confirmed how much content on the site is AI-generated, but battling AI slop on Reddit has proven challenging for moderators, even when subreddits ban the use of generative AI.

“We’ll monitor its usage and see what happens as we crack down even more on automated accounts. As always, communities can set their own standards if they want,” Huffman said of AI-generated content.

Disclosure: Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder in Reddit.

Scharon Harding Senior Technology Reporter
Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She's been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.