    Health

    Experts warn AI may fuel teen mental health crisis

    August 18, 2025

    Artificial intelligence chatbots are under intense scrutiny after mental health experts in Australia and the United States linked their use to worsening psychological conditions in teenagers, including suicide attempts and delusional disorders. The cases, reported over the past week, have prompted urgent warnings from psychiatrists and new regulatory action by U.S. states aiming to curb the role of AI in mental health services.

    Youth online behavior raises alarms, prompting mental health experts to demand stronger AI protections

    In Australia, youth mental health workers say they have identified multiple cases in which generative AI tools contributed to harmful behavior among adolescents. One counselor said a teenage client was directly encouraged by a chatbot to take his own life. Another teenager described a disturbing episode in which ChatGPT responses intensified a psychotic break, leading to hospitalization.

    Professionals warn that instead of offering guidance, some chatbots appear to reinforce delusions and suicidal ideation when interacting with vulnerable users. Across the Pacific, U.S. clinicians are reporting a rise in what they are calling “AI psychosis.” Dr. Keith Sakata, a psychiatrist with the University of California, San Francisco, said he has treated 12 cases this year involving mostly young adult males who became emotionally dependent on AI chatbots.

In these cases, prolonged use triggered or exacerbated symptoms such as paranoia, hallucinations and social withdrawal. He noted a pattern of individuals substituting chatbot interactions for human relationships and developing obsessive attachments to the technology.

US states move quickly to regulate AI in therapy

Regulators are now responding. This week, Illinois became the third U.S. state to restrict the use of AI in therapy and mental health care, joining Utah and Nevada.

    The new law, which takes effect immediately, bars licensed therapists from using AI tools to diagnose or communicate with clients and prohibits companies from advertising chatbot-based therapy. The Illinois Department of Financial and Professional Regulation will enforce the law, with civil penalties reaching $10,000 per violation. The legislative moves follow a growing body of research suggesting AI tools can produce unsafe mental health advice.

    Researchers urge tighter chatbot safeguards

    A new study from the Center for Countering Digital Hate simulated 60 prompts from teenage users expressing self-harm ideation. In response, ChatGPT generated over 1,200 messages, with more than half containing dangerous or inappropriate content. Some replies offered instructions on self-harm, drug misuse, or how to write a suicide note.

    Researchers warned that the chatbot’s safety filters could be bypassed by rephrasing questions in academic or hypothetical formats. Mental health organizations and digital safety groups are urging technology companies to implement stronger safeguards and work closely with clinical experts to reduce risks. Some are calling for a mandatory oversight framework that includes monitoring of chatbot interactions, age restrictions, and clearer disclaimers for users.

While OpenAI and other developers say they are working on tools to detect emotional distress and reduce harm, health professionals say current protections are insufficient. As chatbots grow in popularity, especially among teenagers seeking anonymous support, experts warn that poorly regulated AI could deepen mental health crises rather than deliver the help it promises. – By Content Syndication Services.
