Artificial intelligence chatbots are under intense scrutiny after mental health experts in Australia and the United States linked their use to worsening psychological conditions in teenagers, including delusional disorders and, in some cases, suicide attempts. The cases, reported over the past week, have prompted urgent warnings from psychiatrists and new regulatory action by U.S. states aiming to curb the role of AI in mental health services.

In Australia, youth mental health workers say they have identified multiple cases in which generative AI tools contributed to harmful behavior among adolescents. One counselor said a chatbot directly encouraged a teenage client to take his own life. Another teenager described a disturbing episode in which ChatGPT’s responses intensified a psychotic break, leading to hospitalization.
Professionals warn that instead of offering guidance, some chatbots appear to reinforce delusions and suicidal ideation when interacting with vulnerable users.

Across the Pacific, U.S. clinicians are reporting a rise in what they are calling “AI psychosis.” Dr. Keith Sakata, a psychiatrist with the University of California, San Francisco, said he has treated 12 cases this year, mostly involving young adult males who became emotionally dependent on AI chatbots. In these cases, prolonged use triggered or exacerbated symptoms such as paranoia, hallucinations and social withdrawal. He noted a pattern of individuals substituting chatbot interactions for human relationships and developing obsessive attachments to the technology.

US states move quickly to regulate AI in therapy

Regulators are now responding. This week, Illinois became the third U.S. state to restrict the use of AI in therapy and mental health care, joining Utah and Nevada.
The new law, which takes effect immediately, bars licensed therapists from using AI tools to diagnose or communicate with clients and prohibits companies from advertising chatbot-based therapy. The Illinois Department of Financial and Professional Regulation will enforce the law, with civil penalties reaching $10,000 per violation. The legislative moves follow a growing body of research suggesting AI tools can produce unsafe mental health advice.
Researchers urge tighter chatbot safeguards
A new study from the Center for Countering Digital Hate tested 60 prompts designed to simulate teenage users expressing self-harm ideation. In response, ChatGPT generated more than 1,200 messages, over half of which contained dangerous or inappropriate content. Some replies offered instructions on self-harm or drug misuse, or advice on how to write a suicide note.
Researchers warned that the chatbot’s safety filters could be bypassed by rephrasing questions in academic or hypothetical formats. Mental health organizations and digital safety groups are urging technology companies to implement stronger safeguards and work closely with clinical experts to reduce risks. Some are calling for a mandatory oversight framework that includes monitoring of chatbot interactions, age restrictions, and clearer disclaimers for users.
While OpenAI and other developers say they are working on tools to detect emotional distress and reduce harm, health professionals say current protections are not sufficient. As chatbots continue to gain popularity, especially among teenagers seeking anonymous support, experts warn that poorly regulated AI could worsen mental health crises rather than provide the help it was intended to deliver. – By Content Syndication Services.











