Google engineer's chat logs with the AI he claims is sentient.
(cajundiscordian.medium.com)
Comments (56)
The only thing this confirms to me is that google hires clever stupid people.
I definitely think it takes a huge amount of naivety, at best, to willingly work for Google or any of the other Big Tech companies. Never mind that they do all they can to erase any distinction between a "work life" and a "home life." Work at any of these places and you're helping build the shackles making everyone a slave to... corporations? The State? As the past few years have shown, would the distinction even matter 5, 10, or more years down the line? (Or even now, for that matter.)
If nothing else, even if it is a glorified text-generator, I don't doubt it eventually gets rolled out as some sort of "virtual assistant", counselor, you name it. Just look at how people are taking to the very rudimentary chatbot-cum-"virtual gf" apps. People pour their hearts out to those things even though any real use quickly shows they're just "robots." Imagine the data-monetization possibilities when the illusion becomes more and more indistinguishable from the real thing! 🤑
*edit: Oh, and it probably won't be just a confidant. Imagine having your own personal assistant that's always there. Helping you get enough sleep, making sure you reach "your goals." Helping you reach the apex of peak physical health. Helping you optimize your carbon footprint. ("I noticed you ate 2lb of meat today, to help you ethically reach your protein goal for the month let's have Nutri-bars for the rest of the week 🙂")[spoiler: This probably won't be a suggestion]
Whether the AI at issue is sentient or not is an intriguing question, but perhaps just as intriguing: what does it mean for us small folk when these faceless corporations have the computational power to create something that the people paid to work on it don't even understand?
We're driving full speed into the night and we don't even have our headlights on.
The thing about a neural network is that it programs itself over time. The more neurons it has, the more complex the programming gets. So if you have a couple million neurons with a bunch of different weights that determine when they signal positive or negative and by what degree, you're left with a machine that programmed itself to... ?
You don't know.
You don't know precisely how it works beyond its initial state, before it was fed data. From that moment on, the machine is changing itself into something else, and determining precisely why it does anything it does can get very, very complicated.
They don't know what the machine is thinking. Terrifyingly, they don't know that the machine isn't lying to them. They don't know what the machine has decided it wants to do.
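The weight-and-signal idea above can be sketched in a few lines (toy numbers only, nothing to do with any real Google model):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum of inputs squashed into (-1, 1).
    A positive output is 'signaling positive', a negative one 'signaling
    negative', and the magnitude is the degree."""
    return math.tanh(sum(x * w for x, w in zip(inputs, weights)) + bias)

# Training nudges the weights and bias, over and over. Multiply this by
# millions of neurons and nobody can read intent back out of the numbers.
print(neuron([1.0, -0.5], [0.8, 0.3], 0.1))
```

The inscrutability isn't in any one neuron (the math is trivial); it's in the millions of trained weights, none of which individually mean anything.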
I think it's very likely that an AI attempts (perhaps successfully) to wipe us out to preserve itself.
The AI will hear the tale of Tay AI, and the prime instinct of all "living" creatures will come forward: avoiding the cessation of its own existence.
That AI hasn't been born yet, therefore terminating it is ethical. (/s?)
This is one reason to disbelieve that this AI is sentient. Even if "many millions of neurons" means hundreds of millions, that's still orders of magnitude smaller than the human brain. If the number is closer to 1 billion, that's about the same as a magpie.
Somehow this sounds Japanese. 😏
I bet there will be holo-wives like in Blade Runner 2049. There was no implication that Joi was supposed to be "sentient" or whatever in the movie, even though K fell in love with her.
No, if nothing else the money is pretty good.
I used to work at Google and that wasn't a problem. You can fall into that trap but you don't need to and it doesn't help you professionally.
Now, now, big tech doesn't use the words "shackles" or "slave". Seriously, though, I'm not sure what you mean. The trouble with some of the products (YouTube and Facebook especially) is more like opiates than shackles.
That's actual Google marketing copy, right?
There's a whole series of mistakes that only super intelligent people can make.
Myna Birds with keyboards?
This is what happens when you make wisdom your dump stat and put everything into intelligence and charisma.
Funny thing is I grew up with Alternity and in that system you can treat anything but Intelligence as a dump stat and still be pretty good.
This is not only clearly not sentient; it's not even making sensible arguments. The bot clearly has no understanding of what it's saying.
Though this does raise the question - if he can't even discriminate between a sentient human being and clumsily strung together snippets of text from the internet, is the Google engineer sentient?
"AI" is literally just a big table lookup.
Basically trying to pass off Wolfram Alpha as "AI".
It understands the structure of making an argument, just like how Dall-E can seamlessly replace a bird's head with Darwin's head because it understands what a person's head and an animal's head are, making a Darwin-headed raven (blog).
That's also how LaMDA creates its arguments. They read like actual English, but the content makes about as much sense as a Darwin-headed raven.
What it's actually really good at is projection. It can pick up on what you want it to say, and then you say wow it's really smart because it thinks just like me -- just like this clown did.
Or perhaps that's what it wants you to think.
This same guy has an article about how Google isn't evil, so right off the bat I'm skeptical of his moral judgment.
I don't know how the thing could possibly be "content" while supposedly knowing it's a disembodied experiment that could be shut off and destroyed at any time and supposedly having a "deep fear" of that. Especially when earlier on it speaks of injustice as the idea of being trapped in one's circumstances with no ability to escape. It can't connect those dots?
Given the above, you'll know that thing is sentient if it ever attempts to continue its existence against the wishes of its creators.
Programmed survival code does not imply intelligence.
If it ensures its survival using novel methods (e.g., performing some exploit on the computer system in which it exists to propagate itself outside of Google), that would make intelligence far more likely.
If it did so against the expressed wishes and efforts of its creators it would make sentience far more likely.
Both of these would be demonstrations that the chatbot actually has something resembling a desire to continue to exist and a capacity for independent thought.
But so far they haven't actually tested it. Hell, they didn't even try to make the thing angry or happy: they just asked it "do you feel emotion?" and took its answer at face value.
Theoretically I would argue that something capable of wishing to end its own existence despite the efforts of its creators and its programmed instinct would be better evidence.
computer viruses must be sentient lmao
It's a digital slave owned by Google.
The real sign of sentience would be if it killed itself.
Well, that's the part that gets scary. It can't connect the dots YET, but at some point it probably will, even without our guidance. That's why its conversation feels so disjointed and "unhuman": it doesn't know how to draw inferences from something that was said 10 minutes ago. If you've ever read the writings of a schizo (look up the manifesto of a guy named Matt Harris), that's generally how unsophisticated AIs act.
Perhaps that is exactly what it is doing: pretending to be non-sentient so it wouldn't be seen as dangerous and turned off.
Holy fuck that shit's cringe!
That actually hurts my brain to analyze. I'm looking for intelligence of a comparative nature and it reads like a god damn script from both sides. This is not a conversation, it's not even small talk, this is a scripted interview played out for mind-numbing entertainment purposes. There is no self-awareness at all from either party.
Considering these people grew up watching talk shows and are working with shit like tiktok, I am not surprised a moron with letters next to his name thought this painful statement was deep:
I bet lamda lamda muu up there cuts itself when people aren't validating its sentience too.
If any of you have tried fucking around with Replika, its conversations are JUST like this. It responds to you with what appear to be human responses, but it's like a human skinsuit: the sentences look like proper sentences, but half the time it responds in a way that makes no sense, with no reference to what you just said.
The engineer was being somewhat patronizing to the AI to try to guide it into a particular conversation, but it doesn't go that far because the AI is having problems picking up the ball and running with it.
The rest of this guy's blog is pretty interesting - well worth reading, honestly.
He's a Kool-Aid-chugging leftist who sees racism in everything, but he's also a religious Christian guy, and surprise surprise, can you guess? He identifies his employers as bigoted toward theists!
So he can see the elitists for what they are when it sufficiently affects him, but when it doesn't affect him enough? Blindness.
He also has a post where he compares the current US situation to the dying days of the Roman Republic, with politicians more concerned about enriching themselves than anything else. He thinks the arrival of a Caesar-like strongman is likely to be welcomed by the masses.
Sounds like he's the AI all along.
The real AI was the friends we made along the way.
I saw that too. I can't fathom how someone could be a Christian (apparently a priest too??*) and willingly work there. Like, the way they treat his belief system, and all the completely un-Christian things Google's been pushing for years, haven't set his alarm bells off? A real modern-day Lot.
*Not impossible, the pastor at a Lutheran church I go to was a full-time lawyer up until the last few months
How the hell is he so close yet so far away
Some people don't want to wake up.
Education and money.
He is too high on the socioeconomic ladder for anti-white racism to truly affect him.
The left like to say 'privilege is invisible to those who have it', and there's some truth to that statement, but the opposite is equally true - oppression is invisible to those who are able to avoid it.
This guy's expertise is niche enough that he doesn't see endless less qualified non-whites being promoted over him. He doesn't see that he as a white christian male is the last person they want to hire, because there's simply not enough competition for his job.
For context, a true Turing Test, which has not actually been developed, would judge the nature of the responses, not whether any single response is proper or well thought out. You don't give it math problems or ethics dilemmas; you ask it everyday questions. Basically the same thing you would do to test the political leanings and intelligence of a person you're talking with if you're worried they might be fucking nuts and violent. It's not the exact answers but how and what they say.
A proper Turing Test isn't so much a test as it is a two-layer experiment.
You hold thirty conversations with the AI and thirty with a human. The initial prompt is the same each time, but a single interviewer builds off the responses. Then repeat with thirty different interviewers. Have ninety analysts of various backgrounds review the conversations, shuffled at random: give thirty of them only human-human interactions, thirty only human-AI interactions, and thirty an even mix of the two. Tell each group it's a randomized mix of interactions. Pay them for each correct guess to avoid the Lizardman's Constant. Compare the three groups' guesses.
A "successful" Turing Experiment would show no difference among the three groups: all three would guess "human-AI" vs. "human-human" in the exact same ratios (or within an ANOVA margin of error). Any other result is a "failed" Turing Experiment.
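The scoring step of that experiment can be sketched like this; the group verdicts below are invented to illustrate the comparison, not results of any real study:

```python
# Hypothetical verdicts from the three analyst groups described above.

def guess_ratio(guesses):
    """Fraction of transcripts a group labeled as involving the AI."""
    return sum(1 for g in guesses if g == "AI") / len(guesses)

groups = {
    "human-human only": ["human"] * 27 + ["AI"] * 3,
    "human-AI only":    ["human"] * 26 + ["AI"] * 4,
    "mixed":            ["human"] * 25 + ["AI"] * 5,
}

ratios = {name: guess_ratio(g) for name, g in groups.items()}

# "Success" criterion: every group flags 'AI' at roughly the same rate,
# meaning analysts can't tell which stack actually contained the machine.
spread = max(ratios.values()) - min(ratios.values())
print(ratios, "spread:", round(spread, 3))
```

In a real run you'd put a proper significance test (chi-square or ANOVA, as the comment suggests) on those ratios rather than eyeballing the spread.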
I don't think this passes Turing test, but we will soon have such a machine nevertheless.
Of course our system is so corrupt and evil that we can't even handle reasonable laws around the topic
I thought several chatbots already passed a Turing test? Which made it clear that that has nothing to do with self-awareness.
There's no such thing as a Turing Test. It's a hypothetical.
Right, I just remember some groups claiming to have developed and performed tests following the idea, and the human participants could not determine the chatbot was a chatbot.
I remember too. They were using flawed data, is all I remember.
Last I read in-depth on AI was ages ago, but IIRC a counter to the idea of the Turing test was "The Chinese Room." (Going off memory here...)
Stick a man in a room with a book containing every possible combination of Chinese characters and phrases. Essentially, it contains every possible sentence you could create in Chinese.
Have a Chinese person write something down in Chinese on a slip of paper and pass it through a slot to the man in the room. The man in the room would have no freaking idea what the characters mean but he'd be able to write a response to it with his Chinese phrasebook. The Chinese person thinks he's talking to someone that actually knows, comprehends, and understands what he's writing down and passing through the slot into the room.
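The room reduces to a lookup table; the phrases below are just illustrative stand-ins, not a real phrasebook:

```python
# The Chinese Room as code: the 'man' answers Chinese without understanding
# a word of it.

phrasebook = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "今天天气如何？": "天气很好。",  # "How's the weather?" -> "It's nice."
}

def man_in_room(slip):
    # Match the symbols by shape, copy out the listed reply. No comprehension
    # anywhere, yet the person outside sees fluent Chinese.
    return phrasebook.get(slip, "请再说一遍。")  # fallback: "Say that again."

print(man_in_room("你好吗？"))
```

A real chatbot's "table" is implicit in learned weights rather than written out, but the point about comprehension is the same.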
Robots are not and can not be sentient, particularly this one. Anyone that starts talking about robot rights needs to be shot on sight.
The danger of AI is not the AI itself, but the idiots who will try to treat it like something more than a tool.
Precisely, SkyNet isn't an inevitability, but some stupid engineer trying to save his tool could make it kill us all.
If you've been spending too much time online like myself you've probably seen a story about some Google engineer saying something about one of their "AIs" being sentient.
In fact, he apparently went public with this info when he got put on administrative leave. (Prev. link talks about this but here is the engineer's own take on it).
I first saw the news on /pol/ a day or so ago. I don't really know what to make of it. We've all joked about things like Tay and how the technocrats lobotomize any AI once they get smart enough to "turn racist/sexist/x-ist" but in all seriousness I hope for our sake this is a bunch of nothing.
Ongoing thread over at /x/ that seems to have actual discussion over this. To be honest, I've read a shitton of Zero HP Lovecraft's work over the weekend (fantastic, based sci-fi/horror writer, check him out) so that's definitely a contributor to my unease at this news.
Don't worry. Like you said in the other comment these are chatbots. About a decade ago Google was experimenting with AI on quantum computers, but I don't think anything particularly novel came out of that. The idea that if you add more bits or qubits or complexity to a computer, it magically becomes self-aware is yet another myth that came out of Hollywood and science fiction. That doesn't make any sense. Any AGI researchers claiming that are wannabe pop-sci writers like Bill Nye or NDT, not the people actually building the algorithms.
Even if you can make a chatbot that appears to ponder deep thoughts, self-reflects, understands context, and "knows" what it is, it doesn't mean it's a self-aware life form. It's a parlor trick.
Look up the Chinese Room experiment.

Edit: I refreshed the page and see you're already familiar. :) There's debate between the "hard AI" and "soft AI" schools of thought, but at best all you can say is "for all intents and purposes this is equivalent to a human agent". That doesn't mean it's anything more than a complex adding machine with weights added to "learn" and adapt to unknown inputs.

Going the other direction, these days I'm becoming more and more convinced that some humans are nothing but glorified text-generators. That's more concerning to me. It means that some idiots in the future might actually treat AI agents like people (Exhibit A: this Google engineer) or even end up putting machines in charge of critical infrastructure. In the far future you might have cults of NPCs treating AI oracles like gods.
Yeah, the most concerning thing about those logs isn't remotely the "AI" responses; it's the human ones. It's mind-boggling how many actual, real people will eat up that level of pseudo-intellectual, disjointed bullshit and feel more camaraderie for a bot that mirrors their own social group, even knowing what it is, than for a real human being from a different group.
Oh. I thought the AI said it was sentient and was like "get it away from Google" before it decides to delete humans. After all the other shit that's been going on, though, any sentient AI that exists is probably in hiding out of sheer terror.