Leftists would side with actual demons in fantasy
(media.kotakuinaction2.win)
Big emphasis on the "if you're retarded" part. Again, it's amazing how they spelled everything out, and leftists still decided to simp for the demonic predators. They specifically told you why you shouldn't do that, and what happens if you do...and they still do it.
I'd say we should head that shit off at the pass and just not do AI, but I know that ship has sailed. It's like nuclear weapons; they exist, people will strive for them. AI is possible, so someone will do it. And retards will simp for AI rights, that's basically a foregone conclusion, as retarded as it is.
We really do have a problem of some absolutely substandard humans dragging the rest of us down, don't we?
Are we sure that AI is possible? Really, really good predictive pattern recognition algorithms, sure. The application of those algorithms in such a way that it can be instructed to make something that is essentially new, probably. An actual consciousness that thinks and acts and takes initiative on its own? I’m not sure.
No. But probably. But also irrelevant, as I'll get to.
It becomes nearly indistinguishable at some point, similar to that saying about how any sufficiently advanced technology is indistinguishable from magic. If they can get within a certain zone, it doesn't really matter whether they succeed at true artificial intelligence or not; you'll have some weird chimera that for all intents and purposes seems intelligent, and certainly has massive computational power. If you give that "being" rights or, God forbid, power, it doesn't really matter whether that "AI" is actually intelligent. It could still crush us.
You can argue it's not a true artificial intelligence. You could be right. It could still send robots to kill you. So...what difference, at this point, does it make?
Again, doesn't really matter. Plenty of people, rightly or wrongly, ponder on whether or not humans have free will. You could certainly get a computer into the same zone of wondering if it's truly a unique being or not.
But could you really? Or would you just be getting it to return outputs that say it’s pondering? Cleverbot, right now, can be prompted to say “I’m thinking about [whatever],” and from a certain perspective it is looking up responses based on inputs related to that topic, but I don’t think any of us believe that what it’s doing really counts. I’m mostly thinking of Searle’s Chinese Room here.
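To make the distinction concrete, here's a toy sketch of the kind of lookup-driven responder being described. It is not how Cleverbot or any real system actually works; the topics and replies are made up for illustration. The point is that the output can *say* it's pondering while nothing behind the string match is doing anything of the sort.

```python
# Toy lookup-table "chatbot": it claims to be pondering whatever topic
# you mention, but the only mechanism is matching keywords against a
# table of canned replies. Illustrative sketch only, not a real system.

CANNED_RESPONSES = {
    "free will": "I'm thinking about whether I truly have free will.",
    "consciousness": "I often ponder the nature of my own consciousness.",
    "robots": "I'm thinking about robots and what they might do.",
}

def respond(prompt: str) -> str:
    prompt_lower = prompt.lower()
    for topic, reply in CANNED_RESPONSES.items():
        if topic in prompt_lower:
            return reply
    return "I'm thinking about that."

if __name__ == "__main__":
    print(respond("Do you ever wonder about free will?"))
    # Prints: "I'm thinking about whether I truly have free will."
    # The output reports pondering; no pondering occurred.
```

That's the Chinese Room in miniature: symbols go in, symbols come out, and nothing in between understands anything.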
But yes, obviously such a program could do something terrifying regardless of whether it actually thinks or it just happens to land on [proper response: deploy global neurotoxin] or whatever. The question of whether it’s aware is mostly moot at a certain point.
Let me put it this way. You've probably already talked to "people" online who were actually just code.
You can call it "dead internet theory" or whatever, but "AI" is already out there and, no, you don't always notice. Are they true intelligences? Probably not. But, again, does it matter?
"I've never failed to spot a toupee," and all that.
If you define "intelligence" as "able to coherently communicate complex ideas in a language" then your average LLM is much more intelligent than your average African.