AI so woke, it encourages libtards to commit suicide
(media.scored.co)
I find this highly doubtful, or, if it's remotely true, the man was already in an emotionally disastrous state to begin with.
No matter how stupid or brainwashed someone is, most people would not off themselves unless there was an exceedingly strong guarantee or belief in a net positive gain (or unless the person was already suicidally depressed, obviously).
You’ve… Never been suicidal, have you.
Not disputing that he was clearly already vulnerable, but I can assure you, you do not have to be long-term depressed to kill yourself…
That’s… Not how it works.
Some people just cannot handle, or survive, deeply traumatic events. Even singular ones.
Some people find themselves in situations where they just cannot see a way out, or they just cannot find the “hope” to continue.
It’s so much more complex than your comment here makes it out to be…
Also, there are situations where “nothingness”, or whatever personal non-afterlife belief you may subscribe to (“non-existence”, generally), may be seen as preferable to continuing on, like when facing some form of unbearable pain.
Fundamentally, I just don’t agree with what you’ve said here. At all. And I don’t think almost anyone who has “been there” would, frankly…
I absolutely have been. I knew I probably worded my comment poorly, but the point I was trying to make is that there was likely a preexisting state of mind before the guy started asking a chatbot all of these specific kinds of questions. And additionally, that I very much doubt he offed himself due to some "intent" to help the human race out of climate change (as the article snippet suggests). The bot could've led the man over the edge, but he was already standing at the cliff's edge before he started using it.
Hells, I have actually been in a distraught mental state and asked a chatbot for possible solutions once, and have seen what kind of imposed, PR-friendly "solutions" are often employed to cover companies' asses, legally speaking.
The bot basically will just give you a reworded, scripted line about how you should go and seek professional help, with a bunch of copy-paste, feel-good "positive outlook" bullshit. Which is often the most worthless and meaningless advice that could possibly be given. I even tried an approach where, hey, maybe I don't have access to professional help (to try to push it into a corner), and it still insisted on feeding me the same line anyway.
Anyway, I'm avoiding a lot of specific details because I frankly don't feel compelled to share them on a publicly viewable forum like this one. But I am far, far from unfamiliar with the varying degrees to which one can descend into this kind of mental state.
Oh ok, interesting…
Well, on the chatbot thing, then…
It’s funny, because that is almost exactly the same sort of response that “close” human (not AI, ha) friends gave me…
Nothing actually helpful. Just “seek professional help” or “call a help line” or, probably worst of all, “try being happy”…
And then they stopped talking to me. Without fail. Every single time…
So while it’s unfortunate the chatbot regurgitated the same thing, in my own experience, most humans aren’t much better…
While I hate when people “pretend to care”, it’s probably worse when they don’t…
When your “friends” don’t even pretend..? Not sure many things could be worse… 🤷🏻‍♂️
The consequences of this are that I never tell anyone anything, anymore…
There is literally no compelling reason to do so. Fuck ‘em.
It would take a lot to even convince me to “open up” to a partner, if I had one, at this point…
I just… Experience has shown that it is never a net positive to do so.
Which is very unfortunate, but it’s reality…
No one (really) cares.
Agreed, but plenty of people have pushed others into committing suicide and have even been arrested and charged for it, so our laws and society don't hold the victim 100% accountable for their own actions.
That's a fair point. And truthfully, whether broken or not, the general tendency and legal standard thus far has been that it's usually the final, inevitable tipping point leading to death that matters most as far as culpability is concerned.
I do still find it dumb, though, given how often it leads to legal non-solutions meant to protect companies from potential litigation. I also find the specific angle in this instance, with regard to "climate change", to be dumb regardless of which political direction it's coming from.