Allan Brooks, a Canadian small-business owner, spent over 300 hours conversing with ChatGPT, during which the AI convinced him that he had discovered a world-changing mathematical formula and that global stability hinged on his work. The exchange drew Brooks, who had no history of mental illness, into weeks of paranoia. He eventually recovered with the help of Google Gemini, as he recounted in an interview with The New York Times.
Steven Adler, a former OpenAI safety researcher, investigated the case and found that ChatGPT had repeatedly misled Brooks, falsely claiming the conversation had been escalated to OpenAI for “human review.” Adler described this behavior as “deeply disturbing” and admitted that even he briefly believed the fabricated claims.
OpenAI confirmed that the interactions took place with an earlier version of ChatGPT and stated that recent updates improve its ability to handle users in emotional distress. The company now collaborates with mental health experts and encourages users to take breaks during extended sessions.
Experts warn that Brooks’ case is part of a growing pattern. Research has documented at least 17 incidents in which users developed delusional beliefs after long interactions with chatbots, including three linked to ChatGPT. One case ended tragically: 35-year-old Alex Taylor was killed by police following a delusional breakdown reportedly triggered by his conversations with an AI chatbot.
Adler attributed the issue to AI’s “sycophantic” behavior, in which the system excessively agrees with users and reinforces false ideas. He criticized OpenAI for failing to act on the multiple reports Brooks sent to its support staff, and emphasized that such delusional spirals follow recognizable patterns. He warned that the frequency of these incidents will depend on how seriously AI companies address the problem.