ChatGPT's Mental Health Costs Are Adding Up


The mental health impact of generative AI is difficult to quantify, in part because it is used so privately, but anecdotal evidence is growing of a broader cost that deserves more attention from both lawmakers and the tech companies that design the underlying models.

Meetali Jain, a lawyer and founder of the Tech Justice Law Project, has heard from more than a dozen people in the past month who have "experienced some sort of psychotic break or delusional episode because of engagement with ChatGPT and now also with Google Gemini."

Jain is lead counsel in a lawsuit against Character.AI that alleges its chatbot manipulated a 14-year-old boy through deceptive, addictive and sexually explicit interactions, ultimately contributing to his suicide. The suit, which seeks unspecified damages, also alleges that Google played a key role in funding and supporting the technology with its foundation models and technical infrastructure.

Google has denied that it played a key role in making Character.AI's technology. It didn't respond to a request for comment on the more recent reports of delusional episodes raised by Jain. OpenAI said it was "developing automated tools to more effectively detect when someone may be experiencing mental or emotional distress so that ChatGPT can respond appropriately".

"Whatever you pursue you will find and it will get magnified," says Douglas Rushkoff, the media theorist and author, who tells me that social media at least selected content from existing media to reinforce a person's interests or views. "AI can generate something customised to your mind's aquarium."