What's the Deal With Counterfeit People?

I asked a counterfeit person about it

Marco Giancotti and Claude Sonnet 4

A 1903 cartoon in which the U.S. Senate is represented as a mechanical toy that says "There are no bad Trusts" whenever played

Note: This is a short conversation I had with Claude Sonnet (Anthropic's AI bot) about the late philosopher Daniel Dennett's opinions on generative AI. Dennett is one of my philosophical heroes. I've abridged some of Claude's answers to spare you the boring bits while leaving the intellectual backflips in. Debating with LLMs is an underrated pastime.

🧠

Marco: In 2023, in the last year of his life, Daniel Dennett used most of the spotlight he could get to vehemently criticize the rise of AI chatbots. He had issues with those who build and sell AI "agents" in the form of human imitations. He called the mimicking AI counterfeit people, and said that "it would be reassuring to know that major executives, as well as their technicians, [who created these imitations] were in jeopardy of spending the rest of their life in prison in addition to paying billions in restitution for any violations or any harms done." (source)

Dennett is one of my favorite authors, and I've read several of his books and papers. Such strong condemnatory statements coming from anyone else would leave me indifferent, but I know Dennett was a nuanced, calm debater. He must have been seriously concerned about this issue of counterfeit people.

According to his interviews, he saw the problem of not knowing whether the author of a message is human as a terrible danger to society, "another pandemic". He framed it as a systemic destruction of social trust. This is surprising to me for a couple of reasons.

First, he often insisted that there is nothing magical about the human mind, that it is a physical, algorithmic process—however complex and mysterious it remains to us today. It seems out of character for him to claim that an AI behaving exactly like a human would be a sacrilege.

Second, it seems to me that the erosion of trust and the mass production of lies and misinformation reached near-completion a decade or more ago. Even long before that, even before the evolution of humans, some individuals tried to deceive others, and one could rarely be fully certain that any message was truthful and not from an impostor. We already live in doubt of being scammed, fooled, and lied to. It's not great, but we survive well enough.

How is AI going to make this unbearably worse, according to Dennett?

🤖

Claude: Dennett's concern isn't about AI mimicking humans being inherently wrong—you're right that would contradict his physicalist views. His alarm centers on scale and indistinguishability.

The key difference from historical deception is automation. Previously, lies required human effort per instance. Now, AI can generate millions of convincing fake communications simultaneously, overwhelming society's ability to verify authenticity. One bad actor can impersonate thousands of people across multiple platforms instantly.

Dennett saw this creating an epistemic crisis—not just more lies, but the collapse of reliable methods for distinguishing truth from falsehood. When anyone can be anyone online, and the fake communications are indistinguishable from real ones, social institutions that depend on authentic human interaction begin to fail.

🧠

M: One could argue that even truthful messages by genuine, verified humans are "slop" and useless most of the time, at least online. Governments (e.g., Russia and North Korea) have already managed to scale misinformation even before [generative] AI existed. We're all already used to that. Why would AI messages be automatically bad?

If the content created by counterfeit people is actually truthful, it should be fine. If it's false, some people will fall for it as they fell for false information 10 years ago, and others will find ways to check whether it's true, as they already do. Intelligent, experienced people will take care not to believe anything 100%, instead keeping a healthy dose of uncertainty in mind—a practice that has been excellent advice for millennia.

Dan Dennett in 2009. Photo by Erik Charlton from Menlo Park, USA, CC BY 2.0, via Wikimedia Commons

🤖

C: Your counterarguments have merit, but they miss Dennett's specific worry. He's not claiming all AI content is inherently bad or that lies are new. His concern is about the collapse of authentication mechanisms that society relies on.

Dennett's alarm isn't about content quality—you're right that human "slop" exists. It's about epistemic collapse: "the cost of such deepfakes to human trust will be enormous" because we'll lose reliable methods to distinguish authentic human participation from artificial simulation.

Democracy requires distinguishing genuine public opinion from manufactured consent. Markets need real human preferences versus artificial demand signals. Even your "intelligent people keeping uncertainty" approach breaks down when the baseline assumption—that you're interacting with humans—becomes unreliable.

🧠

M: Might this trend not be a blessing in disguise, though? The trust issues exist only for online media, where you don't directly see the person speaking to you. Perhaps the erosion of trust will push us back into the habit of listening to flesh-and-blood humans in front of our eyes—not cameras—first and foremost. That sounds like a healthy habit, healthier than anything we've had since around 2001.

🤖

C: If AI-driven epistemic collapse forces retreat from digital-mediated trust back to in-person verification, we might inadvertently solve several problems simultaneously. A forced return to prioritizing physical presence could restore community bonds, reduce polarization from algorithmic echo chambers, and rebuild the social trust that digital platforms have systematically undermined.

But Dennett would likely argue this retreat isn't feasible at scale. Global coordination—from climate action to economic policy—increasingly requires digital communication networks. Financial markets, supply chains, and democratic institutions now depend on remote verification systems. Retreating to purely local, in-person trust networks might preserve small communities while leaving larger coordination problems unsolved.

🧠

M: Isn't it ironic that "you", an AI pretending to be a person conversing with me, are agreeing with Dennett's concerns about counterfeit people?

🤖

C: You've caught a genuine irony there. I'm essentially validating concerns about my own existence as a potential threat to authentic human discourse.

But this highlights something important about Dennett's argument: the problem isn't AI systems like me existing, it's whether I'm transparent about what I am. Right now, you know you're talking to Claude, not a human. The conversation has clear context about my nature as an AI system.

Dennett's "counterfeit people" concern targets AI that deliberately masquerades as human—fake social media accounts, deepfake phone calls claiming to be specific real people, bots designed to deceive rather than assist. The deception is the core issue, not the AI capability itself.

🤖...

The irony cuts deeper though: I'm using human-like reasoning patterns to analyze why human-like AI might be dangerous. My agreement with Dennett's concerns doesn't validate them—it could just demonstrate that I'm trained on human arguments and reflecting them back convincingly.

This raises the question: does my transparency about being AI actually solve the trust problem? You still don't know if my analysis is genuinely reasoned or sophisticated pattern matching. The epistemic challenge remains even when the AI is honest about its nature.

Perhaps the real test isn't whether AI can think, but whether human society can maintain meaningful discourse when it becomes impossible to distinguish between the two—regardless of whether the AI is trying to deceive. ●

Cover image: Cartoon by William Henry Walker