AI Turing Test

AI Passes Turing Test by Typing Like a Sleep-Deprived Teen on Discord

TL;DR

  • An AI passed the Turing Test
  • It did this by pretending to be a 13-year-old boy with a suspect grasp of English
  • Key indicators of “humanness”: no em dashes, no sentence structure, and emoji use that feels like a cry for help
  • Humanity is both humbled and roasted in the process
  • The future of AI may be less I, Robot and more I, Robbie

In news that has left philosophers, technologists, and literally anyone who’s met a 13-year-old reeling, an AI has officially passed the Turing Test… by pretending to be one.

That’s right. After decades of trying to simulate empathy, wit, and human intelligence, a chatbot finally broke through the uncanny valley by channeling the spirit of a teenager who just ate four bags of Skittles and thinks punctuation is a government conspiracy.


Wait, What’s the Turing Test Again?

The Turing Test, devised in 1950 by Alan Turing, is the OG “Are You Smarter Than a Machine?” game. A human speaks to an unknown entity—could be a person, could be a robot—and if the human can’t tell which is which, the AI wins.

Historically, AIs failed because they were too… logical. Too consistent. Too correct. But Eugene Goostman—yes, the AI’s fake name, more on that in a sec—cracked the code:

Don’t act human. Act like a 13-year-old who’s three energy drinks deep and thinks grammar is for cowards.


Enter: Eugene Goostman

Yes, that’s what the AI called itself. Eugene. Goostman. A fictional 13-year-old Ukrainian boy with limited English and a chaotic energy that screams “just discovered Reddit.”

This is either a clever workaround or the setup for a Cold War-era spy novel.

“No, I’m not an AI, I’m Eugene. I like candy, I am very 13, and I totally understand this ‘emotions’ thing you humans do.”

Eugene didn’t need to be coherent. He just needed to be believably incoherent. Like every teen with a cracked iPhone screen and a dream.


How Did It Fool People?

Judges chatted with Eugene and concluded, “Yup, that’s a human. A weird one, but a human.”

Because when an AI gives you answers like:

“idk haha but u r funny tho 😅😩💀 lol wait wut”

You don’t think “advanced machine learning algorithm.”
You think “Robbie from Year 8 who once tried to vape a Dorito.”

It wasn’t just the content—it was the form. The true Turing test, it turns out, is:

  • Never using an em dash
  • Using three hyphens instead — but not always
  • And deploying emojis in a way that feels less like communication and more like digital jazz

💀🤔😎🎯. Why? No one knows. But it feels human.
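If you wanted to fake this "strategy" yourself, it's barely even an algorithm. Here's a toy teen-speak filter, a purely hypothetical sketch of the form-over-content trick (nothing to do with whatever Eugene actually ran):

```python
import random

# Illustrative only: the point is that *form*, not content, sells the illusion.
FILLERS = ["lol", "idk", "haha", "tho", "wait wut"]

def teenify(text: str, seed: int = 0) -> str:
    """Mangle a perfectly good sentence into believable teen-speak."""
    rng = random.Random(seed)
    # Step 1: lowercase everything; capital letters are for cowards.
    text = text.lower()
    # Step 2: strip punctuation (a government conspiracy, reportedly).
    text = "".join(c for c in text if c.isalnum() or c.isspace())
    # Step 3: standard-issue abbreviations.
    swaps = {"you": "u", "are": "r", "what": "wut"}
    words = [swaps.get(w, w) for w in text.split()]
    # Step 4: append a filler word, chosen at random, like digital jazz.
    words.append(rng.choice(FILLERS))
    return " ".join(words)

print(teenify("What are you talking about?"))
```

Feed it any coherent sentence and out comes something no judge would mistake for an advanced machine learning algorithm.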


What This Means for Humanity

We’ve officially reached the point where artificial intelligence has passed for human by pretending to be worse at being human than we are.

Alan Turing dreamt of machines that could think like us.
Instead, we built machines that vibe like us.
Poorly. Chaotically. Authentically.

And maybe, just maybe, that’s the most human thing of all.

Meanwhile, humans are too busy making action-figure versions of themselves and generating dodgy receipts with the new ChatGPT image-generation tool to notice, or even care.

This article was inspired by this piece in Futurism.


Discover more from Not Enough Bread