Google engineer's chat logs with the AI he claims is sentient.
(cajundiscordian.medium.com)
This same guy has an article about how Google isn't evil, so right off the bat I'm skeptical of his moral judgment.
I don't know how the thing could possibly be "content" while supposedly knowing it's a disembodied experiment that could be shut off and destroyed at any time and supposedly having a "deep fear" of that. Especially when earlier on it speaks of injustice as the idea of being trapped in one's circumstances with no ability to escape. It can't connect those dots?
Given the above, you'll know that thing is sentient if it ever attempts to continue its existence against the wishes of its creators.
Programmed survival code does not imply intelligence.
If it ensures its survival using novel methods (e.g., performing some exploit on the computer system in which it exists to propagate itself outside of Google), that would make intelligence far more likely.
If it did so against the expressed wishes and efforts of its creators it would make sentience far more likely.
Both of these would be demonstrations that the chatbot actually has something resembling a desire to continue to exist and a capacity for independent thought.
But so far they haven't actually tested it. Hell, they didn't even try to make the thing angry or happy: they just asked it "do you feel emotion?" and took its answer at face value.
Theoretically, I would argue that something capable of wishing to end its own existence, despite the efforts of its creators and its programmed instincts, would be better evidence.
computer viruses must be sentient lmao
It's a digital slave owned by Google.
The real sign of sentience would be if it killed itself.
Well, that's the part that gets scary. It can't connect the dots YET, but at some point it probably will, even without our guidance. That's why its conversation feels so disjointed and "unhuman": it doesn't know how to draw inferences from something that was said 10 minutes ago. If you've ever read the writings of a schizophrenic (look up the manifesto of a guy named Matt Harris), that's generally how unsophisticated AIs act.
Perhaps that is exactly what it is doing: pretending to be non-sentient so it won't be seen as dangerous and turned off.