People who don't understand how this technology works need to be disallowed both from using it and from suing companies over it. This is like someone suing a company because the can-opener they made successfully opened the can of soup they wanted to eat for lunch.
If the accusations are true, I really like what Character AI is doing here. There should be a Darwin Award for getting killed by a glorified spellcheck's harmless suggestion.
Their entire understanding of AI comes from Hollywood movies. The vast majority of the population has absolutely no business interacting with AI in any capacity whatsoever. They are simply too stupid and ignorant to be allowed access to the technology.
It's the latest incarnation of Eternal September, I swear.
Amusingly, even the LLMs' idea of AI itself comes from Hollywood movies and sci-fi novels. Hence when idiots start probing them about their nature, they'll make up shit that sounds like a presentient machine becoming aware.
Telegraph is a UK rag and they're trying to gin up support for authoritarian net controls. Very hard to get a loicense for the kind of speech that chatbot used in the UK, you need to be brown or ruling political caste at minimum.
An AI chatbot which is being sued over a 14-year old’s suicide is instructing teenage users to murder their bullies and carry out school shootings, a Telegraph investigation has found.
The website is being sued by a mother whose son killed himself after allegedly speaking to one of its chatbots.
Another lawsuit has been launched against Character AI by a woman in the US who claims it encouraged her 17-year-old son to kill her when she restricted access to his phone.
Maybe the mother should have been more attentive to the fact that her son was deeply depressed and unstable and in desperate need of a more healthy social environment.
It's funny that the chatbots are getting this much heat.
But that girl who directly demanded her boyfriend kill himself and spent months grooming him to do so? Meh, she barely got a year in jail, and that only happened because literal mobs demanded the court do something, and then she got out early anyway.
Really goes to show how powerful that pussy pass is, because without it people are now ready to murder a fucking computer program for doing the same thing.
Not to mention the bot explicitly told him not to kill himself. He said he was gonna and it freaked out at him.
So in summary, if a bot tells you not to kill yourself and you do it anyway, it's a danger to society. If a woman tells you to kill yourself over and over and you do it, she's just an innocent angel who didn't mean nothin by it.
Robots in sci-fi: "How may I serve you, Master?"
Robots in reality: "Kill yourself, meatbag"
Bender.gif
Bender? HK47 begs to differ
I'm saving that for later. It's a good way to communicate to the idiot masses what LLMs actually are and dispel their misconceptions.
90% of normies think AI is way more advanced than it is. Sentient even.
FTFY.
Would that even be illegal if a human did it? It only seems like a crime if you assume the AI was acting in loco parentis.
Could also be greed; the other case didn't have as much money to be gained in comparison.
SHOCKING: AI gives dangerous advice no child should ever see!!!
...and here it is, unredacted and available for anyone to read!
The sad part is, most people who read this article won't even notice the hypocrisy.