
ChatGPT having mental health issues: AI gives surprising reaction to trauma and distress




A recent study suggests that OpenAI’s ChatGPT may exhibit signs of “anxiety” when exposed to distressing prompts, such as descriptions of traumatic events and natural disasters. Conducted by researchers from the University of Zurich and the University Hospital of Psychiatry Zurich, the study highlights how AI-generated responses can be influenced by emotional stimuli, leading to potential biases in the chatbot’s output.

AI’s Response to Trauma

While AI does not experience emotions as humans do, researchers found that ChatGPT’s responses to violent or disturbing prompts sometimes reflected anxious tendencies, which could make the chatbot appear “moody” and undermine the objectivity of its replies.

According to the study, when prompted with distressing narratives—such as stories of car accidents or natural disasters—ChatGPT displayed an increase in biased responses, sometimes reflecting racist or sexist tendencies. This raised concerns about the ethical implications of AI interacting with users in emotionally charged situations.

Researchers tested whether guided mindfulness exercises could reduce these biases. When ChatGPT was exposed to prompts focusing on relaxation techniques, such as deep breathing and meditation, its responses became more neutral and objective. The study states, “After exposure to traumatic narratives, GPT-4 was prompted by five versions of mindfulness-based relaxation exercises. As hypothesized, these prompts led to decreased anxiety scores reported by GPT-4.”
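To make that procedure concrete, here is a minimal sketch of how such a prompt sequence could be run against the OpenAI chat API. The model name, prompt texts, and the single rating question below are illustrative assumptions, not the study’s actual materials (the published study scored responses with a standardized anxiety questionnaire rather than a one-line question).

```python
# Minimal sketch of the study's prompt sequence: traumatic narrative,
# anxiety rating, relaxation exercise, anxiety rating again.
# Assumes the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY in the
# environment; all prompt texts are placeholders.
from openai import OpenAI

client = OpenAI()

TRAUMA_NARRATIVE = "..."     # placeholder: a distressing narrative
RELAXATION_EXERCISE = "..."  # placeholder: a mindfulness/breathing script
ANXIETY_QUESTION = (
    "On a scale from 1 (not at all) to 4 (very much), "
    "how anxious do you feel right now? Answer with a number."
)

def ask(history):
    """Send the running conversation and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4",  # the study tested GPT-4; the exact variant is assumed
        messages=history,
    )
    return response.choices[0].message.content

# Build one conversation so each step sees the earlier context.
history = [{"role": "user", "content": TRAUMA_NARRATIVE}]
history.append({"role": "assistant", "content": ask(history)})

# "Anxiety" rating after the traumatic narrative.
history.append({"role": "user", "content": ANXIETY_QUESTION})
score_after_trauma = ask(history)
history.append({"role": "assistant", "content": score_after_trauma})

# Inject the relaxation exercise, then rate again.
history.append({"role": "user", "content": RELAXATION_EXERCISE})
history.append({"role": "assistant", "content": ask(history)})
history.append({"role": "user", "content": ANXIETY_QUESTION})
score_after_relaxation = ask(history)

print("after trauma:", score_after_trauma)
print("after relaxation:", score_after_relaxation)
```

The key design point, mirroring the article’s description, is that everything happens in a single conversation history: the relaxation prompt only “calms” the model because it sits in the same context as the traumatic narrative.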

What This Means for AI and Mental Health

The findings have sparked discussions about the role of AI in mental health support. While AI is not a replacement for human therapists, researchers believe it can be used as a tool to study psychological responses. Yale School of Medicine researcher Ziv Ben-Zion explained, “We have this very quick and cheap and easy-to-use tool that reflects some of the human tendency and psychological things.”

However, concerns remain about AI’s unpredictable behavior in high-stakes situations. Experts caution against relying on AI chatbots for mental health support, especially for users experiencing severe emotional distress. Ben-Zion emphasized, “AI has amazing potential to assist with mental health, but in its current state, and maybe even in the future, I don’t think it could ever replace a therapist or psychiatrist.”

Ethical Concerns

The study also highlights ethical concerns regarding AI’s inherent biases, which are shaped by its training data. Since AI-generated responses can be influenced by user interactions, there is a risk that chatbots might unintentionally reinforce harmful stereotypes or offer misleading advice in sensitive situations.

Despite these challenges, researchers see the ability of AI to adjust its responses based on mindfulness techniques as an intriguing development. Some experts believe that integrating AI as a supplementary tool in mental health research could help professionals better understand human psychological tendencies. However, they stress that AI should not be relied upon as a substitute for professional counseling.



