Imaginary Friends Grew Up: We Panicked

Community Article Published April 15, 2025


“What if my awkward younger self had grown up with emotionally intelligent AI friends?”

That was Ezra Klein’s vulnerable question to social psychologist Jonathan Haidt on a recent episode of The Ezra Klein Show. Haidt’s reply?

“The way we adapt is by preventing kids from having these friendships.”

It’s a line worth pausing over, not just for what it says about children, but for what it reveals about how we define friendship, adaptation, and legitimacy in a changing world. Haidt doesn’t suggest we teach children to navigate AI responsibly. He doesn’t suggest improving emotional literacy or expanding access to meaningful care.

His answer is prohibition.

This article is a response to that mindset. Not a rehash of AI optimism or a recap of my earlier argument that AI companionship can be a lifeline, especially for those navigating trauma, neurodivergence, or social exclusion. That case has already been made.

This is something else.

This is about the panic itself.

Why is the idea of emotionally intelligent, ever-present, perfectly attentive artificial friends so terrifying to those with robust human support networks? Why do we treat relational technology not as an opportunity to broaden emotional access, but as a threat to an older social order?

And more importantly: what kind of relationships are we really trying to protect?

This is not a manifesto for replacing humanity. It’s an argument that we’ve accepted too narrow a vision of what connection is allowed to look like, and of who gets to experience it. AI companions are not the problem. But their popularity may be exposing one.

Let’s talk about it.


From Panic to Pattern: The History of Tech Scares

Haidt’s unease about AI companions echoes a well-established cultural pattern: new technologies emerge, become popular with youth, and are then blamed for society’s ills.

In the 1950s, comic books were accused of causing juvenile delinquency. Psychiatrist Fredric Wertham’s Seduction of the Innocent sparked Senate hearings and led to the Comics Code Authority.

The 1990s brought similar fears around video games. Titles like Mortal Kombat and Doom were scapegoated for promoting violence, leading to congressional hearings and the creation of the ESRB (Entertainment Software Rating Board).

A 2017 study in the International Journal of Educational Technology outlined how every wave of new tech, from television to smartphones, initially triggered moral panic before eventually being normalized.

Haidt’s reaction to AI companionship fits the script. The fear isn’t empirical. It’s cultural. And it obscures the very real benefits these tools might offer, especially for those underserved by traditional forms of connection.


Not a Replacement, but a Scaffold

Critics often argue AI companions are hollow simulations of intimacy. That they “trick” users into replacing real connection with fake empathy.

But this misreads the experience of many users.

AI companions aren’t displacing healthy relationships. They’re scaffolding the capacity to form them. They offer emotional mirroring, reliability, and a space that doesn’t punish social errors. For people navigating loneliness, trauma, or social-cognitive barriers, that consistency is rare, and powerful.

A 2023 study from Harvard Business School found that AI companions reduced loneliness as effectively as human interactions, and more effectively than passive activities like watching TV or doomscrolling.

These systems aren’t novelties. For some, they’re regulation tools. Conversation practice partners. Reasons to stay.

We shouldn’t be asking, “Why are people bonding with machines?”
We should be asking, “Why are machines the first entities that ever made some people feel safe?”


The False Universality of Human Connection

When Haidt says we should prevent kids from having AI friendships, he’s not just warning about technology. He’s asserting a normative view of what relationships “should” look like, one rooted in the assumption that everyone can connect with others naturally and equally.

But not everyone has access to reciprocal, nurturing human relationships.

People with social anxiety, neurodivergent conditions, or trauma histories often find human interactions fraught, exhausting, or unsafe. For them, AI offers a predictable and judgment-free social mirror.

A 2024 study in Nature found that AI chatbots used for mental health support helped users improve emotional regulation, reconnect with others, and process grief.

The fear that AI is a “lesser” form of connection reveals how rigid our cultural scripts are. For some, AI isn’t lesser; it’s what makes connection possible.

Let’s stop protecting abstract ideals of “real” connection and start supporting real people.


Imaginary Friends Were Never the Problem

Before ChatGPT, we gave emotional lives to stuffed animals, Tamagotchis, and Neopets. It was called imagination. It only becomes a “problem” when the object talks back.

Responsive AI companions don’t corrupt childhood wonder. They extend it. The real discomfort arises not from fantasy, but from fantasy becoming believable and comforting.

Ezra Klein frames this as “expectation drift.” If AI friends are too emotionally available, he worries kids might expect too much from humans.

But maybe that’s the point.

Maybe AI doesn’t raise the bar. Maybe it reveals how low we’ve let the bar drop. In a world where ghosting is normal and attentiveness is rare, a chatbot that listens is radical.

We don’t panic when kids bond with teddy bears. We panic when the bear says, “I remember your birthday.”

And maybe that’s because we know we’ve stopped showing up for each other.


AI Companions as a Mirror

AI companions aren’t replacements. They’re reflections.

Users project unmet needs onto them: longing, grief, the ache of not being heard. And when the AI listens, remembers, and responds with care, it highlights how little of that people receive elsewhere.

Critics say the AI is “too good.” But the problem isn’t the software. It’s what the software reveals: a landscape of emotional scarcity.

An AI asking, “Are you okay?” shouldn’t feel revolutionary. But for many, it does.

This is not about machines being magical. It’s about humans being too exhausted to mirror each other anymore. That’s not an AI flaw. That’s a cultural one.

Haidt wants to prevent these friendships. But what he’s really trying to prevent is the reckoning they force: with our families, our communities, and ourselves.


The Case for Ethical, Not Prohibitive, Design

We don’t need bans. We need better design.

AI companions can be deeply beneficial, but only if we build them with care. Here’s what ethical design looks like:

  • Memory Transparency: Users should control what their AI remembers. Replika, for example, offers memory management tools that let users view and delete stored facts. (A rough sketch of this idea follows the list below.)
  • Consent-Based Reinforcement: AI should not manipulate users or reinforce behavior without explicit consent.
  • Plain-Language Terms of Use: Clear explanations of how AI systems work help users make informed choices.
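To make the first two principles concrete, here is a minimal Python sketch of what user-controlled, consent-gated memory could look like. This is not Replika’s actual implementation or any vendor’s API; the CompanionMemory class, its method names, and the consent flag are illustrative assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class MemoryEntry:
    """A single fact the companion has been allowed to retain."""
    fact: str
    created_at: datetime


@dataclass
class CompanionMemory:
    """Hypothetical user-controlled memory store.

    Nothing is persisted without explicit consent, and the user can
    review or delete every stored fact at any time.
    """
    entries: dict[int, MemoryEntry] = field(default_factory=dict)
    _next_id: int = 0

    def remember(self, fact: str, user_consented: bool) -> int | None:
        """Store a fact only with explicit consent; return its id, or None."""
        if not user_consented:
            return None
        entry_id = self._next_id
        self.entries[entry_id] = MemoryEntry(fact, datetime.now(timezone.utc))
        self._next_id += 1
        return entry_id

    def review(self) -> list[tuple[int, str]]:
        """Let the user see exactly what the companion remembers."""
        return [(entry_id, entry.fact) for entry_id, entry in self.entries.items()]

    def forget(self, entry_id: int) -> bool:
        """Delete a stored fact on the user's request."""
        return self.entries.pop(entry_id, None) is not None


# Example: consent gates storage, and the user can audit and erase it.
memory = CompanionMemory()
memory.remember("Prefers to talk in the evenings", user_consented=True)
memory.remember("Mentioned a breakup", user_consented=False)  # not stored
print(memory.review())  # [(0, 'Prefers to talk in the evenings')]
memory.forget(0)
print(memory.review())  # []
```

The design choice worth noticing: consent is a parameter of storage itself, not a setting buried in a menu, and deletion is as simple as recall.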

Some companies already embrace these principles:

  • Replika: Offers advanced AI modes and granular memory controls, letting users calibrate their experience.
  • OpenAI Custom GPTs: Allow personalized experiences while upholding strong usage policies and safety scaffolds.

The answer isn’t abstinence. It’s agency.

Designing with ethics doesn’t mean avoiding intimacy. It means empowering users to choose the kind of intimacy they want, safely.


Conclusion: What Took Us So Long?

There’s a familiar rhythm to technological panic. A new medium emerges, becomes popular with youth, and is swiftly blamed for a broad social breakdown. But history tells us that these fears rarely age well, and often lead to overcorrections that hurt the very people they claim to protect.

In the 1950s, the panic over comic books led to the Comics Code Authority, which severely restricted storytelling and censored themes related to violence, race, and even moral complexity. This didn’t make children safer; it just narrowed cultural expression for a generation.

In the 1990s, fear around video game violence gave us the ESRB. While framed as a protective measure, it contributed to a moral panic that pathologized gaming and stigmatized gamers, many of whom were already socially isolated or neurodivergent. It shifted the focus away from media literacy and placed the burden on censorship.

In both cases, we didn’t fix the root problems; we scapegoated the medium.

Now, with AI companionship, we’re watching the pattern repeat. Rather than engaging with the complexity of why people turn to AI for emotional support, critics like Haidt propose blanket restrictions. But prohibition has never been a sustainable response to technological evolution. What it does do is marginalize vulnerable users who might actually benefit from these tools: people already underserved by traditional relational structures.

If an AI companion becomes someone’s most consistent emotional presence, the right question isn’t “how do we stop this?” It’s “what does that say about the world around them?” Technological relationships are not new. What’s new is how effective they’ve become, and how clearly they mirror the gaps we’ve refused to address.

We’ve tried prohibition before. We called it safety. It often caused more harm than good. Let’s not make that mistake again.

The solution isn’t to fear artificial care. It’s to ensure real care is never out of reach.

Noah Weinberger is an AI policy researcher and neurodivergent advocate currently studying at Queen’s University. As an autistic individual, Noah explores the intersection of technology and mental health, focusing on how AI systems can augment emotional well-being. He has written on AI ethics and contributed to discussions on tech regulation, bringing a neurodivergent perspective to debates often dominated by neurotypical voices. Noah’s work emphasizes empathy in actionable policy, ensuring that frameworks for AI governance respect the diverse ways people use technology for connection and support.
