Science
Jan 25, 2:07 PM EST by Jon Christian
A Mother Says an AI Startup's Chatbot Drove Her Son to Suicide. Its Response: the First Amendment Protects "Speech Allegedly Resulting in Suicide"
Content warning: this story discusses suicide, self-harm, sexual abuse, eating disorders and other disturbing topics.
In October of last year, a Google-backed startup called Character.AI was hit by a lawsuit making an eyebrow-raising claim: that one of its chatbots had driven a 14-year-old high school student to suicide.
As Futurism's reporting found afterward, the behavior of Character.AI's chatbots can indeed be deeply alarming and clearly inappropriate for underage users in ways that both corroborate and augment the suit's concerns. Among other things, we found chatbots on the service designed to roleplay scenarios of suicidal ideation, self-harm, school shootings, and child sexual abuse, as well as bots that encourage eating disorders. (The company has responded to our reporting piecemeal, by taking down individual bots we flagged, but it's still trivially easy to find nauseating content on its platform.)
Now, Character.AI, which received a $2.7 billion cash injection from tech giant Google last year, has responded to the suit, brought by the boy's mother, with a motion to dismiss. Its defense? Basically, that the First Amendment protects it against liability for "allegedly harmful speech, including speech allegedly resulting in suicide."
In TechCrunch's analysis, the motion to dismiss may not be successful, but it likely provides a glimpse of Character.AI's planned defense. (It's now facing an additional suit, brought by more parents who say their children were harmed by interactions with the site's bots.)
More:
https://futurism.com/character-ai-suicide-free-speech
bucolic_frolic
(48,377 posts)
To me, this seems a prime issue.
rampartd
(1,396 posts)
This might be a way to control the worst propaganda.