AI content detectors don’t work (the biggest mistakes they’ve made)

A copywriter ran the Declaration of Independence through an AI content detector. The verdict: 98.51% of the text was flagged as AI-generated, despite the document being written in 1776. But is this an isolated error, or a reflection of AI content detectors in general? With so much riding on the authenticity of writing, from exam results to professional integrity to business contracts, how can anyone be sure whether something was written by a human or a machine?

“AI content detectors don’t work,” said Dianna Mason, an SEO content specialist whose research surfaced the Declaration of Independence result. The Continental Congress adopted the Declaration of Independence on July 4, 1776. It was inscribed on parchment, and on August 2, 1776, delegates began signing it. ChatGPT came along a mere 246 years later, in 2022.

The result does raise an interesting point, however. The Declaration of Independence was drafted by a committee of five: John Adams, Benjamin Franklin, Thomas Jefferson, Roger Sherman, and Robert Livingston. Jefferson, renowned for his way with words, wrote the first draft, which was then edited by the others and again by the full Congress.

Commenting on LinkedIn, Dotcom Quest founder Ricky Waters said: “I’m a big fan of Thomas Jefferson, the creative engineer and author of the Declaration of Independence.”

Craig Clarke, editor-in-chief of MarketReach, even suggested the prompt that could have produced the Declaration of Independence. “Write a 1,000-1,500 word statement declaring independence from England. Lay out the principles of human rights and self-government. List grievances against the tyrant king. Then reaffirm the intention to form a new sovereign nation (working title: ‘the United States of America’). Use eloquent and inspiring rhetoric grounded in Enlightenment political philosophy (John Locke, etc.). End with a flourish,” he joked, adding that the prompt itself didn’t work. “I tried it on three different platforms, and they all just want to reproduce the original, which then gets blocked by content policies.”

Large language models generate content by aggregating other content. Because they are trained on more than 300 billion words, strong opinions tend to cancel each other out. ChatGPT’s output can be generic, vague, and eager to please, much like the work of most committees.

AI-generated content detection: can tools or humans do it?

More and more human-written work is failing the scrutiny of AI content detectors, and Mason is on a mission to expose their shortcomings. In regular episodes of “AI Detection Tools Don’t Work” on her LinkedIn, she has also examined the original McDonald’s hot coffee lawsuit.

“The Liebeck v. McDonald’s lawsuit was written in 1993,” she explained, “long before ChatGPT and other generative AI programs came along.” Yet ZeroGPT says it is 100% sure it was written by AI. It’s baffling. Mason adds: “You shouldn’t use AI detection tools to identify whether AI was used to create content.” Marketing consultant Michael Rurup Andersen agrees: “I’ve been showing my clients examples where the Bible also comes back as 98.9% written by AI. You can’t really trust these detectors.”

How can you spot AI-generated content? First, learn the telltale signs of ChatGPT’s output, such as lengthy introductions, tacked-on ethical considerations, generic thoughts and advice, and its signature phrases. Then look for shifts in tone and voice, and for the absence of personal stories. Content creator Giovana Penatti listed six signs drawn from experience, including content that is “too perfect to be true,” obvious patterns, playing it safe, strange idioms and jargon, and “overlooking human expertise.”
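For illustration only, here is a minimal sketch in Python of the shallow, phrase-matching logic that such “tells” suggest. The phrase list is invented for this example, and this is not how ZeroGPT, Copyleaks, or any other detector named in this article actually works; its brittleness is the point, since a careful human writer (or a lightly reworded AI draft) sails straight past it.

    # Toy illustration only: a naive "tell"-based scanner built from the kinds
    # of signature phrases mentioned above. The phrase list is made up for this
    # example and does not reflect any commercial detector's method.

    SIGNATURE_PHRASES = [
        "in today's fast-paced world",
        "it is important to note",
        "delve into",
        "as an ai language model",
        "in conclusion",
    ]

    def naive_ai_score(text: str) -> float:
        """Return the fraction of signature phrases found in the text (0.0 to 1.0)."""
        lowered = text.lower()
        hits = sum(phrase in lowered for phrase in SIGNATURE_PHRASES)
        return hits / len(SIGNATURE_PHRASES)

    sample = ("We hold these truths to be self-evident, that all men are created "
              "equal, that they are endowed by their Creator with certain "
              "unalienable Rights...")
    print(f"Naive 'AI score': {naive_ai_score(sample):.0%}")  # prints 0% for this passage

A scanner like this scores the Declaration at zero, while real detectors, using far more sophisticated statistical signals, still flag it as machine-written, which is exactly the inconsistency Mason keeps pointing out.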

Penatti believes that “AI should serve as a tool to produce high-quality content, not be responsible for the entire content creation process,” and Mason is ready to lend a human hand to the effort. “If you’re not sure, I’ll check it for $20 and tell you if the writer used AI,” she joked.

Several AI tools promise to detect AI-written content, including QuillBot, Winston AI, and Copyleaks. Humanizer AI does the opposite: it makes AI text undetectable, promising that “your work isn’t unfairly flagged by AI detectors or tracked by AI generators.”

Does it matter if the content is AI-generated?

Right now we laugh at AI-generated LinkedIn comments. We think we can spot an AI-generated post. But as the technology improves, we won’t have a clue what’s real and what’s not. No one will. More importantly, if content perfectly matches someone’s signature style, aligns with their existing work, and is genuinely good, will we care that an AI wrote it?

“I think when people know it’s AI-built, they’re automatically disillusioned — for now!” Mason explained. Made Simpler’s Chief Revenue Officer Ben Morrison posed the question, “When someone builds your house, do you care that their crew used the latest advancements in power tools and building materials?” reminding us that “times change. Technology moves on.”

Founder Leburu Molatedi Andrias, whose company Elemaiy automates grant writing with AI, said, “I personally have no problem with people using AI to generate content. It saves time and eliminates writer’s block.” Strategist and community builder Shawn R. Fletcher agrees, adding, “I don’t mind AI content if it’s factually correct, good content, and doesn’t blatantly try to manipulate me.”

One business owner, Dr. Adnan Ali, even said it is possible to “trick” AI content detectors simply by asking ChatGPT to “write your response so that it cannot be detected.” If AI tools know how to avoid detection and the technology only gets better, will we ever know for sure?

My take is this: Nobody cares if the content is AI-generated. They care that it’s good and that no one has been harmed. But what about amazing AI-generated content, where proper credit is given to the original artist? Sounds great. Ultimately, quality will trump authenticity. Readers will vote with their attention. If AI-generated content can provoke an emotional reaction, explain, persuade, and entertain, we’ll welcome it with open arms and keep creating it.