Leftists would side with actual demons in fantasy
But could you really? Or would you just be getting it to return outputs that say it's pondering? Cleverbot, right now, can be prompted to say "I'm thinking about [whatever]," and from a certain perspective it is looking up responses based on inputs related to that topic, but I don't think any of us believe that what it's doing really counts. I'm mostly thinking of Searle's Chinese Room here; there's a toy sketch of that lookup idea below.
But yes, obviously such a program could do something terrifying regardless of whether it actually thinks or it just happens to land on [proper response: deploy global neurotoxin] or whatever. The question of whether it’s aware is mostly moot at a certain point.
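To make the lookup point concrete, here's a toy sketch in Python. It's purely illustrative and assumes nothing about how Cleverbot is actually built: a canned-response table can emit "I'm thinking about X" without anything inside doing any thinking.

```python
# Illustrative only, not Cleverbot's actual implementation: a canned-response
# table keyed on keywords. The output *claims* to be pondering a topic, but
# the program is only matching strings. The Chinese Room point in miniature.

CANNED_REPLIES = {
    "consciousness": "I'm thinking about consciousness right now.",
    "weather": "I'm thinking about the weather right now.",
}

def respond(prompt: str) -> str:
    """Return a reply that merely says it's thinking about the topic."""
    lowered = prompt.lower()
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in lowered:
            return reply
    return "I'm thinking about that."

print(respond("Are you pondering consciousness?"))
# -> I'm thinking about consciousness right now.
```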
Let me put it this way. You've probably already talked to "people" online who were actually just code.
You can call it "dead internet theory" or whatever, but "AI" is already out there and, no, you don't always notice. Are they true intelligences? Probably not. But, again, does it matter?
"I've never failed to spot a toupee," and all that.
No, I get that, I’m not disputing that. I’m just saying that as I understand the technology, we have every reason to believe that we’re already pretty good at predictive algorithms and that we can become really, really, really good, but I’m not convinced that—regardless of how much processing power and data you feed into the algorithm—it would ever make the leap to being independently alive and aware.
In other words, I’m sure we can get something that functions like Skynet because someone, somewhere, fucked up and fed it training data or a query or whatever that led its algorithm to output “wipe out humanity.” But we assume because of the expository dialogue that Skynet is truly alive. It thinks, it reacts, it’s self-aware. I’m not sure we can ever get that.