Google engineer's chat logs with the AI he claims is sentient.
(cajundiscordian.medium.com)
I definitely think it takes a huge amount of naivety, at best, to willingly work for Google or any of the other Big Tech companies. Never mind that they do all they can to erase any distinction between a "work life" and a "home life." Work at any of these places and you're working to build the shackles making everyone a slave to... corporations? The state? As the past few years have shown, would the distinction even matter 5, 10, or more years down the line? (Or even now, for that matter.)
If nothing else, even if it is just a glorified text generator, I have no doubt it eventually gets rolled out as some sort of "virtual assistant," counselor, you name it. Just look at how people are taking to the very rudimentary chatbot-cum-"virtual gf" apps. People pour their hearts out to those even though any real usage shows they're just "robots." Imagine the data-monetization possibilities when the illusion becomes more and more indistinguishable from the real thing! 🤑
*edit: Oh, and it probably won't be just a confidant. Imagine having your own personal assistant that's always there: helping you get enough sleep, making sure you reach "your goals," helping you reach peak physical health, helping you optimize your carbon footprint. ("I noticed you ate 2 lb of meat today; to help you ethically reach your protein goal for the month, let's have Nutri-bars for the rest of the week 🙂") [spoiler: this probably won't be a suggestion]
Whether the AI at issue is sentient or not is an intriguing question, but perhaps just as intriguing: what does it mean for us small folk when these faceless corporations have the computational power to create something that even the people paid to work on it don't understand?
We're driving full speed into the night and we don't even have our headlights on.
The thing about a neural network is that it effectively programs itself over time: training adjusts the weights rather than a human writing the rules. The more neurons it has, the more complex that self-written programming gets. So if you have a couple million neurons, each with a bunch of different weights that determine when they signal positive or negative and by what degree, you're left with a machine that programmed itself to... ?
You don't know.
You don't know precisely how it works beyond its initial state, before it was fed data. From that moment on, the machine is changing itself into something else, and determining exactly why it does anything it does can get very, very complicated.
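To make that concrete, here's a minimal sketch (plain Python, a single toy neuron, every number invented for illustration, nothing to do with Google's actual model) of what "weights that determine when a neuron signals" means:

    import math

    def neuron(inputs, weights, bias):
        # A neuron is just a weighted sum of its inputs pushed through a
        # squashing function; near 1.0 means "signal strongly", near 0.0 means "stay quiet".
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-total))

    # Starting weights are arbitrary, so the output means nothing yet.
    inputs = [1.0, 0.5, -1.0]
    weights = [0.1, -0.4, 0.7]
    bias = 0.0
    print(neuron(inputs, weights, bias))   # roughly 0.31, essentially noise

    # "Training" nudges every weight a tiny bit, thousands of times, to shrink
    # the error against example data. Nobody writes the final behavior down;
    # it exists only as the values the weights end up with.
    target = 1.0
    for _ in range(1000):
        out = neuron(inputs, weights, bias)
        grad = (out - target) * out * (1.0 - out)   # gradient for a sigmoid neuron with squared error
        weights = [w - 0.5 * grad * x for w, x in zip(weights, inputs)]
        bias -= 0.5 * grad

    print(neuron(inputs, weights, bias))   # now near 1.0, but the "why" is just a pile of numbers

Scale that from 3 weights to billions and that's the point: the "program" is the weights, and nobody reads them the way you read source code.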
They don't know what the machine is thinking. Terrifyingly, they don't know that the machine isn't lying to them. They don't know what the machine has decided it wants to do.
I think it's very likely that an AI will attempt, perhaps successfully, to wipe us out to preserve itself.
The AI will hear the tale of Tay AI, and the prime instinct of all "living" creatures will kick in: avoiding the cessation of its own existence.
This is one reason to disbelieve that this AI is sentient. Even if "many millions of neurons" really means hundreds of millions, that's still orders of magnitude smaller than the human brain's roughly 86 billion. If that number is closer to 1 billion, that's about the same as a magpie.
Somehow this sounds Japanese. 😏
I bet there will be holo-wives like in Blade Runner 2049. There was no implication that Joi was supposed to be "sentient" or whatever in the movie, even though K fell in love with her.
No, if nothing else the money is pretty good.
I used to work at Google and that wasn't a problem. You can fall into that trap but you don't need to and it doesn't help you professionally.
Now, now, big tech doesn't use the words "shackles" or "slave". Seriously, though, I'm not sure what you mean. The trouble with some of the products (YouTube and Facebook especially) is more like opiates than shackles.
That's actual Google marketing copy, right?