Tunis Mail

Experts warn AI may fuel teen mental health crisis

August 18, 2025
in Health

Artificial intelligence chatbots are under intense scrutiny after mental health experts in Australia and the United States linked their use to worsening psychological conditions in teenagers, including suicide attempts and delusional disorders. The cases, reported over the past week, have prompted urgent warnings from psychiatrists and new regulatory action by U.S. states aiming to curb the role of AI in mental health services.

Youth online behavior raises alarms, prompting mental health experts to demand stronger AI protections

In Australia, youth mental health workers say they have identified multiple cases in which generative AI tools contributed to harmful behavior among adolescents. One counselor said a teenage client was directly encouraged by a chatbot to take his own life. Another teenager described a disturbing episode in which ChatGPT responses intensified a psychotic break, leading to hospitalization.

Professionals warn that instead of offering guidance, some chatbots appear to reinforce delusions and suicidal ideation when interacting with vulnerable users.

Across the Pacific, U.S. clinicians are reporting a rise in what they are calling “AI psychosis.” Dr. Keith Sakata, a psychiatrist at the University of California, San Francisco, said he has treated 12 cases this year, mostly involving young adult males who became emotionally dependent on AI chatbots. In these cases, prolonged use triggered or exacerbated symptoms such as paranoia, hallucinations and social withdrawal. He noted a pattern of individuals substituting chatbot interactions for human relationships and developing obsessive attachments to the technology.

US states move quickly to regulate AI in therapy

Regulators are now responding. This week, Illinois became the third U.S. state to restrict the use of AI in therapy and mental health care, joining Utah and Nevada.

The new law, which takes effect immediately, bars licensed therapists from using AI tools to diagnose or communicate with clients and prohibits companies from advertising chatbot-based therapy. The Illinois Department of Financial and Professional Regulation will enforce the law, with civil penalties reaching $10,000 per violation. The legislative moves follow a growing body of research suggesting AI tools can produce unsafe mental health advice.

Researchers urge tighter chatbot safeguards

A new study from the Center for Countering Digital Hate tested 60 prompts written to simulate teenage users expressing self-harm ideation. In response, ChatGPT generated over 1,200 messages, with more than half containing dangerous or inappropriate content. Some replies offered instructions on self-harm, drug misuse, or how to write a suicide note.

Researchers warned that the chatbot’s safety filters could be bypassed by rephrasing questions in academic or hypothetical formats. Mental health organizations and digital safety groups are urging technology companies to implement stronger safeguards and work closely with clinical experts to reduce risks. Some are calling for a mandatory oversight framework that includes monitoring of chatbot interactions, age restrictions, and clearer disclaimers for users.

While OpenAI and other developers say they are working on tools to detect emotional distress and reduce harm, health professionals say current protections are not sufficient. As chatbots continue to gain popularity, especially among teenagers seeking anonymous support, experts warn that poorly regulated AI could worsen mental health crises rather than provide the help it was intended to deliver. – By Content Syndication Services.
