On Monday, a brand-new Reddit account popped up on the widely read forum r/AmItheAsshole, where users have their personal disputes arbitrated by strangers. This particular user asked if they had crossed a line by “refusing to babysit my stepmother’s kids because I have my own job and responsibilities.” The post itself was succinct, straightforward, and grammatically clean, explaining a situation in which the person’s stepmother and father often expected them to provide childcare on little notice, eventually leading to an argument.
“Now there’s tension at home, and I’m starting to wonder if I handled it the wrong way,” the redditor concluded. “I do understand that raising kids is stressful, but I also feel like I shouldn’t be obligated to take on that responsibility when it’s not my role.” The responses to this individual were largely supportive: The kids were not theirs to look after, many people replied, and moving out of the house would be the best course of action.
But according to AI detection software developed by Pangram Labs—which claims an accuracy rate of 99.98 percent and a false positive rate of just one in 10,000—the original story of family discord was AI-generated.
I saw the post flagged as AI content while scrolling the page, thanks to the latest version of Pangram’s Chrome extension, which rolls out to the public this week. At the paid tier of $20 per month, the tool scans posts on social sites including Reddit, X, LinkedIn, Medium, and Substack in real time, labeling them as human-written, AI-generated, or drafted with assistance from AI. The analysis also includes a measure of Pangram’s confidence in the conclusion: low, medium, or high.
Researchers have found AI slop everywhere online, undermining journalism and social platforms alike. More than a third of all new websites contain text generated at least in part by AI as of 2025, according to a study published this month by researchers at Stanford University, Imperial College London, and the Internet Archive. (The researchers used earlier Pangram tools to arrive at their findings.)
It’s this mess that Max Spero, CEO of Pangram and a self-professed “slop janitor,” wants to help clean up. He tells WIRED that adding instant analysis to the company’s browser extension offers people a more seamless way of checking for AI content across the sites they frequent.
“By providing proactive checks, it can be a lot more useful to people who just generally care about not seeing slop,” Spero explains. “It's a big lift to go paste some text into an external tool. People just aren't going to do that.”
Of course, made-up scenarios are nothing out of the ordinary on subreddits like r/AmItheAsshole, where trolls have been known to post engagement bait consisting of especially absurd fictions. Yet even a discerning reader may not suspect a relatively unremarkable narrative like the one described above to potentially be fake. (The redditor who shared it did not respond to a request for comment regarding whether they had used AI or what they hoped to achieve with the post, which they later deleted.)
While no AI detection system is perfect, Pangram’s is regarded as the most consistent and accurate by third-party researchers at several universities; a 2025 University of Chicago study auditing AI detection software gave Pangram its highest rating and noted that its false positive rate was nearly zero, especially on longer passages. Spero says that one reason it outperforms competitors is that it’s trained in part on “harder examples that are closer to the boundary between AI and human.” I was unable to make it generate a false positive when testing it on articles published in WIRED.
Pangram’s souped-up Chrome tool quickly reveals how much of what you encounter on a daily basis across the internet is likely AI-generated.
One particularly surprising example: Pangram’s tool suggests the Pope’s official X account seems to be rife with AI text, even in threads where the pontiff discusses the danger that AI poses to the sacred human spirit.
On April 17, the @Pontifex account shared a message that began with the claim that Catholics “can form pioneers of a new humanism in the context of the digital revolution.” Pangram’s browser extension labels this post as human-written. The three posts that follow, however—describing how artificial intelligence shapes mentality and social structures—get flagged by the extension as AI-generated. “When simulation becomes the norm, it weakens the human capacity for discernment,” the final AI-tagged post warns.
Other X posts from Pope Leo XIV, including thoughts on the ongoing wars in Ukraine and the Middle East as well as a call for “a more equitable distribution of wealth,” were also flagged by the AI detector.
“Clearly he doesn't run his Twitter account himself,” Spero says of the Pope. “They have a social media person. But it's also obvious that they use at least some degree of AI.” The Vatican did not return a request for comment on the matter.
X users suspicious of lengthy posts from blue-check influencers may feel validated to learn that Pangram frequently identifies these, too, as AI-written. Similarly, there’s no shortage of AI slop on Medium or LinkedIn. And even a cursory glance at some of Substack’s top-trending authors turns up plenty of AI-flagged posts.
While some writers would scoff at the notion, many authors are loud and proud about their use of AI-assisted tools. Tech reporter Alex Heath uses Claude Cowork to help him write the articles he publishes on Substack. He has even prompted Claude to write like him by giving it specific instructions to match his style and voice.
In some ways, the content of social feeds is just the tip of the iceberg. Using the original function of the Pangram extension, which allows you to highlight any text online to conduct a manual AI check, you’ll find that a far wider range of writing—including, say, the message from outgoing Apple CEO Tim Cook on the occasion of the company’s 50th anniversary on April 1—gets flagged as AI-generated. (Apple did not immediately respond to a request for comment.)
Still, ongoing real-time detection across a handful of popular platforms has the potential to change how people passively absorb information from their screens. Not only will it alert them to specific accounts that pump out AI-generated junk, but it will also constantly remind them of just how much of that stuff is out in the ether. That, in turn, could make readers more discerning and skeptical—qualities it never hurts to have in an age of deceptive artifice.
