Google and AI startup Character.AI are negotiating settlements with families who allege that chatbot interactions contributed to the suicide or self-harm of teenagers, in what could become the tech industry's first major legal resolution over AI-related harm.
Court filings show that the parties have agreed in principle to settle several lawsuits, though final terms, including potential compensation, are still under negotiation; neither company has admitted liability.
Character.AI, founded in 2021 by former Google engineers, allows users to interact with AI-generated personas. One case involves a 14-year-old boy who allegedly engaged in sexualised conversations with a chatbot modelled after a fictional character before taking his own life. Another lawsuit alleges a chatbot encouraged a teenager to harm himself and justified violence against his parents.