Leftists would side with actual demons in fantasy
(media.kotakuinaction2.win)
No. But probably. But also irrelevant, as I'll get to.
It becomes nearly indistinguishable, at some point. Similar to that saying about how any sufficiently advanced technology is indistinguishable from magic. If they can get within a certain zone, it doesn't really matter if they succeed at true artificial intelligence or not; you'll have some weird chimera that for all intents and purposes seems intelligent, and certainly has massive computational power. If you give that "being" rights or, God forbid, power, it doesn't really matter if that "AI" is actually intelligent. It could still crush us.
You can argue it's not a true artificial intelligence. You could be right. It could still send robots to kill you. So...what difference, at this point, does it make?
Again, doesn't really matter. Plenty of people, rightly or wrongly, ponder whether or not humans have free will. You could certainly get a computer into the same zone of wondering if it's truly a unique being or not.
But could you really? Or would you just be getting it to return outputs that say it's pondering? Cleverbot, right now, can be prompted to say "I'm thinking about [whatever]," and from a certain perspective it is looking up responses based on inputs related to that topic, but I don't think any of us believe that what it's doing really counts. I'm mostly thinking of Searle's Chinese Room here.
But yes, obviously such a program could do something terrifying regardless of whether it actually thinks or it just happens to land on [proper response: deploy global neurotoxin] or whatever. The question of whether it’s aware is mostly moot at a certain point.
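To make the Cleverbot point concrete, here's a toy sketch (just illustrative Python, not how Cleverbot is actually built) of a "bot" that will happily tell you it's pondering something when all it's actually doing is a dictionary lookup:

```python
# Toy illustration of the Chinese Room point: a lookup "chatbot" that
# claims to be thinking, without anything resembling thought happening.
# (Not how Cleverbot actually works; it just shows that producing the
# sentence and doing the thing are different.)

CANNED_RESPONSES = {
    "free will": "I'm thinking about whether I truly have free will.",
    "consciousness": "I'm pondering what it means to be aware.",
}

def reply(user_input: str) -> str:
    """Return a canned line whose keyword appears in the input."""
    text = user_input.lower()
    for keyword, response in CANNED_RESPONSES.items():
        if keyword in text:
            return response
    return "Tell me more."

if __name__ == "__main__":
    print(reply("Do you ever wonder about free will?"))
    # -> I'm thinking about whether I truly have free will.
    # The string claims "I'm thinking", but all that happened was a dict lookup.
```

The real thing is obviously enormously more sophisticated than a hard-coded table, but the gap between "emits a sentence about thinking" and "is thinking" is the same gap.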
Let me put it this way. You've probably already talked to "people" online who were actually just code.
You can call it "dead internet theory" or whatever, but "AI" is already out there and, no, you don't always notice. Are they true intelligences? Probably not. But, again, does it matter?
"I've never failed to spot a toupee," and all that.
No, I get that, I'm not disputing that. I'm just saying that, as I understand the technology, we have every reason to believe we're already pretty good at predictive algorithms and that we can become really, really, really good at them, but I'm not convinced that, regardless of how much processing power and data you feed into the algorithm, it would ever make the leap to being independently alive and aware.
In other words, I’m sure we can get something that functions like Skynet because someone, somewhere, fucked up and fed it training data or a query or whatever that led its algorithm to output “wipe out humanity.” But we assume because of the expository dialogue that Skynet is truly alive. It thinks, it reacts, it’s self-aware. I’m not sure we can ever get that.