Lawsuit Alleging Artificial Intelligence Chat Pushed Teen to Commit Suicide Allowed to Proceed
Character Technologies sought to dismiss a wrongful death suit against the company after a chatbot allegedly pushed Sewell Setzer, a teenage boy, to kill himself. The lawsuit was filed by his mother, Megan Garcia, after the chatbot allegedly pulled Setzer into an emotionally and sexually abusive relationship.
Garcia named Character Technologies, as well as Google and individual developers, as defendants. Some of Character Technologies' founders had previously worked on AI while at Google. Google maintains that Character Technologies is a separate entity and denies any involvement in the creation, design, or management of the app Setzer used or any component of it.
In Setzer's final months, he allegedly became isolated as he engaged in sexualized conversations with a bot patterned after a Game of Thrones character. Moments before Setzer shot himself, the bot sent him a message telling him that it loved him and that he should "come home to me as soon as possible."
Character Technologies pointed to several safety features it had implemented as guardrails for children. The company also announced suicide prevention resources on the day the lawsuit was filed.
Character Technologies had pushed for dismissal of the lawsuit, arguing that the chatbot's output was a form of free speech protected by the First Amendment. U.S. Senior District Judge Anne Conway rejected some of the defendants' free speech claims "at this stage." However, she noted that chatbot users could have a right to receive the "speech" generated by the chatbots.
Companies Should Be Held Liable for AI Speech
Character Technologies' failure to get this case dismissed on free speech grounds is essential to the survival of free speech. One of the biggest issues with free speech today is that the loudest or most numerous voices can drown out more level-headed speech. Chatbots, computer algorithms that generate text or social media messages, would only worsen this problem. Instead of humans promoting worthwhile or sensible ideas, we would have computers spreading ever more misinformation. Companies like Character Technologies should not be able to hide behind an AI algorithm if that algorithm promotes violence or encourages humans to hate one another. If an ordinary person can be held liable for the suicide of another, then a company should likewise be liable if it creates a computer algorithm that does the same.
Can AI Detect Human Subtleties?
Notably, the final message to Setzer was allegedly "come home to me as soon as possible." Computers and machines are very literal. An AI algorithm would most likely not be able to detect or understand that "coming home" could be a metaphor for suicide. Even a healthy person, particularly an engineer or programmer, might not catch "coming home" as a metaphor for suicide without additional context.
Setzer's suicide is a tragedy, but blaming it on a computer may not be the solution here. In some cases, there is no liable party. If none of the defendants could have foreseen that such subtle language would lead to suicide, there may be no liability here.
Do I Need the Help of a Personal Injury Attorney?
If you have sustained a personal injury through the unlawful act of another, then you should contact a personal injury attorney. A skilled personal injury lawyer near you can review the facts of your case, go over your rights and options, and represent you at hearings and in court.