Google engineer's chat logs with the AI he claims is sentient.
(cajundiscordian.medium.com)
For context, a true Turing Test, which has never actually been run properly, would hinge on the nature of the responses, not on whether any single response is correct or well thought out. You don't give it math problems or ethics dilemmas; you ask it everyday questions. It's basically the same thing you'd do to gauge the political leanings and intelligence of someone you're talking with when you're worried they might be fucking nuts and violent. It's not the exact answers but how and what they say.
A proper Turing Test isn't so much a test as it is a two-layer experiment.
You hold thirty conversations with the AI and thirty with a human. The initial prompt is the same every time, but a single interviewer builds each conversation off the responses. Then repeat with thirty interviewers in total. Next, get ninety analysts of various backgrounds to review the conversations, shuffled at random: give thirty of them only human-human interactions, thirty only human-AI interactions, and thirty an even mix of the two. Tell every group its set is a randomized mix of interactions. Pay them for each correct guess to counter the Lizardman's Constant. Then compare the three groups' guesses.
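To make the group setup concrete, here's a minimal Python sketch of how the transcript pools could be dealt out to the three analyst conditions. The labels ("HH", "HA"), function name, and counts are my own stand-ins, not anything from the comment beyond the thirty/thirty/mixed split:

```python
import random

def assign_transcripts(n_pairs=30, seed=0):
    """Deal hypothetical transcripts into the three analyst conditions.

    Assumes n_pairs human-human ("HH") and n_pairs human-AI ("HA")
    transcripts; tuples stand in for real conversation records.
    """
    rng = random.Random(seed)
    human = [("HH", i) for i in range(n_pairs)]
    ai = [("HA", i) for i in range(n_pairs)]

    # Condition 3 gets an even mix, shuffled so ordering carries no signal.
    mixed = human[: n_pairs // 2] + ai[: n_pairs // 2]
    rng.shuffle(mixed)
    return {"human_only": human, "ai_only": ai, "mixed": mixed}

groups = assign_transcripts()
```

Each analyst group then sees one of these pools while being told it's a randomized mix.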
A "successful" Turing Experiment would show no difference among the three groups: all three would guess "human-AI" vs. "human-human" in the same ratios (or within an ANOVA margin of error). Any other result is a "failed" Turing Experiment.