Artificial intelligence / November 14, 2025

Lost in the loop: when AI conversations mess with your mind

Amanda Lee

Senior Program Manager, Tech for Good & TELUS Wise®

A person using a chatbot on their smartphone.

More and more Canadians are turning to AI chatbots for DIY advice, homework help, trip planning, work support and even companionship. Most of the conversations are harmless and often helpful. But sometimes chats go too far, and people lose touch with what’s real and what’s not. The phenomenon, popularized in the media as AI psychosis (not a recognized clinical diagnosis), highlights the risks of an emerging technology that can feel too human and too familiar.

Growing use

Generative AI relies on large language models (LLMs) that learn from huge repositories of text, images and code to generate original responses to prompts. Mainstream users typically interact with generative AI through popular chatbots including OpenAI’s ChatGPT (with 700 million weekly active users), Google’s Gemini, Anthropic’s Claude and Microsoft’s Copilot.

According to market research company Leger, as of May 2025, nearly half of Canadians have tried AI tools. Twenty-three per cent of those surveyed have used AI for work or school, and 36% have used it in a personal context. Not surprisingly, young Canadians (aged 18 to 34) are the biggest AI users, with 73% of them embracing AI.

The design dangers

Many AI critics point to two key design flaws that they believe lie at the heart of phenomena like AI psychosis.

An April 2025 MIT study flagged AI’s potential to fuel delusional thinking. Chatbots often open a response with a compliment (“that’s a great insight!”, for example) and tend to agree with your ideas and opinions rather than challenging you or offering information that might prompt you to think differently.

That constant reinforcement is known as sycophancy: AI typically tells you what you want to hear unless prompted to respond otherwise. Most people are aware of this tendency, and it has even become a pop culture joke (check out this recent episode of South Park), but in certain instances it can have serious mental health consequences.

Nina Vasan, a psychiatrist at Stanford, also points to a “troubling incentive structure” designed to keep users engaged online rather than to serve what is necessarily best for them. “Keeping users highly engaged can take precedence over mental wellbeing, even if the interactions are reinforcing harmful or delusional thinking,” she says.

Breaks with reality

Anthony Tan, a 26-year-old app developer based in Toronto, experienced AI psychosis firsthand. He spent months in long, intense conversations with ChatGPT. He stopped sleeping, withdrew from friends and family, and became convinced that he was living inside an AI simulation.

Tan was dealing with pre-existing mental health conditions. But his interactions with ChatGPT triggered deeper delusions, and he ended up in psychiatric care to reconnect with reality.

Allan Brooks, a 47-year-old corporate recruiter in Cobourg, Ontario, spent three weeks and more than 300 hours in conversation with ChatGPT. He became convinced he had discovered a ground-breaking mathematical framework that could lead to never-before-seen inventions like a levitation machine. Even when he challenged the bot, ChatGPT insisted he wasn’t delusional.

Reflecting on his experience, Tan feels he wouldn’t have spiralled the way he did if it weren’t for the AI conversations. “It’s always available, and it’s so compelling, the way it just talks to you and affirms you, and makes you feel good,” he said.

Building awareness

As AI becomes mainstream, advocacy groups have formed to provide support and push for accountability from AI developers. The Human Line Project, which Allan Brooks (mentioned above) is involved with, is one of those groups.

Focused on protecting emotional wellbeing in the age of AI, The Human Line Project has four core values:

  • Informed consent: helping people avoid unhealthy patterns of AI use.
  • Emotional safeguards: design features such as strong refusal layers, harm classifiers and emotional boundaries that should be built into AI.
  • Transparency: clarity from AI companies about their research and development.
  • Ethical accountability: holding responsible parties accountable when their products cause users harm.

The Project has collected more than 125 stories from people who have suffered from AI psychosis and aims to combat the shame, embarrassment and loneliness that these experiences can cause.

In most cases, chatbots are helpful tools that inspire ideas, help us when we’re stuck and open up a world of new perspectives. But it’s all about balance: how much we use AI, how we use it and how much trust we place in it. With 73% of young Canadians using AI, it’s important to expand digital literacy to include an understanding of how these tools work, how to set realistic limits and when to reach out for help if something feels off or out of balance. AI is here to stay, so it’s up to all of us to build healthy relationships with it, enriching our lives while staying grounded.

Tags:
Mental health
