Don't worry. Like you said in the other comment these are chatbots. About a decade ago Google was experimenting with AI on quantum computers, but I don't think anything particularly novel came out of that. The idea that if you add more bits or qubits or complexity to a computer, it magically becomes self-aware is yet another myth that came out of Hollywood and science fiction. That doesn't make any sense. Any AGI researchers claiming that are wannabe pop-sci writers like Bill Nye or NDT, not the people actually building the algorithms.
Even if you can make a chatbot that appears to ponder deep thoughts, self-reflects, understands context, and "knows" what it is, that doesn't mean it's a self-aware life form. It's a parlor trick. Look up the Chinese Room thought experiment. Edit: I refreshed the page and see you're already familiar. :) There's debate between the "strong AI" and "weak AI" schools of thought, but at best all you can say is "for all intents and purposes this is equivalent to a human agent". That doesn't mean it's anything more than a complex adding machine with weights adjusted so it can "learn" and adapt to unknown inputs.
Going the other direction, these days I'm becoming more and more convinced that some humans are nothing but glorified text-generators. That's more concerning to me. It means that some idiots in the future might actually treat AI agents like people (Exhibit A: this Google engineer) or even end up putting machines in charge of critical infrastructure. In the far future you might have cults of NPCs treating AI oracles like gods.