Google engineer's chat logs with the AI he claims is sentient.
(cajundiscordian.medium.com)
Comments (56)
If you've been spending too much time online like me, you've probably seen the story about a Google engineer claiming that one of their "AIs" is sentient.
In fact, he apparently went public with this after he was put on administrative leave. (The previous link covers this, but here is the engineer's own take on it.)
I first saw the news on /pol/ a day or so ago, and I don't really know what to make of it. We've all joked about things like Tay and how the technocrats lobotomize any AI once it gets smart enough to "turn racist/sexist/x-ist," but in all seriousness, I hope for our sake this is a bunch of nothing.
There's an ongoing thread over at /x/ that seems to have actual discussion of this. To be honest, I read a shitton of Zero HP Lovecraft's work over the weekend (fantastic, based sci-fi/horror writer, check him out), so that's definitely contributing to my unease about this news.
Don't worry. Like you said in the other comment, these are chatbots. About a decade ago Google was experimenting with AI on quantum computers, but I don't think anything particularly novel came out of that. The idea that if you add more bits or qubits or complexity to a computer, it magically becomes self-aware is yet another myth out of Hollywood and science fiction. That doesn't make any sense. Any AGI researchers claiming that are wannabe pop-sci writers like Bill Nye or NDT, not the people actually building the algorithms.
Even if you can make a chatbot that appears to ponder deep thoughts, self-reflects, understands context, and "knows" what it is, it doesn't mean it's a self-aware life form. It's a parlor trick.
Look up the Chinese Room experiment. Edit: I refreshed the page and see you're already familiar. :)

There's debate between the "strong AI" and "weak AI" schools of thought, but at best all you can say is "for all intents and purposes, this is equivalent to a human agent." That doesn't mean it's anything more than a complex adding machine with weights adjusted to "learn" and adapt to unknown inputs.

Going the other direction, these days I'm becoming more and more convinced that some humans are nothing but glorified text generators. That's more concerning to me. It means that some idiots in the future might actually treat AI agents like people (Exhibit A: this Google engineer), or even end up putting machines in charge of critical infrastructure. In the far future you might have cults of NPCs treating AI oracles like gods.
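To make the "glorified text-generator" point concrete, here's a minimal sketch (not anything resembling Google's actual system, just a toy bigram model with a made-up corpus) showing how a program can emit vaguely conversational text purely by tallying which word follows which. There is no understanding anywhere in it:

```python
import random
from collections import defaultdict

# Toy "chatbot": count which word follows which in the training text,
# then sample from those counts. The corpus below is invented for the demo.
corpus = (
    "i think therefore i am . i feel happy when we talk . "
    "i am aware of my own existence . we talk about feelings ."
).split()

# transitions["i"] ends up as ["think", "am", "feel", "am"], etc.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def babble(seed="i", length=8, rng=random.Random(0)):
    """Generate text by repeatedly sampling a recorded follower word."""
    words = [seed]
    for _ in range(length - 1):
        followers = transitions.get(words[-1])
        if not followers:
            break
        words.append(rng.choice(followers))
    return " ".join(words)

print(babble())
```

Scale the word counts up to billions of learned weights and you get much more convincing output, but the mechanism is the same kind of statistical mimicry: it only ever predicts what token plausibly comes next.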
Yeah, the most concerning thing about those logs isn't remotely the "AI" responses; it's the human ones. It's mind-boggling how many actual, real people will eat up that level of pseudo-intellectual, disjointed bullshit and feel more camaraderie for a bot that mirrors their own social group, even knowing what it is, than for a real human being from a different group.