I've been reading some OpenAI writing prompts on Elon Musk's twitter and I have doubts that a computer actually generated them. The one that made me wonder is some story about Elon adding a "cannoli button" function. https://twitter.com/mishaboar/status/1599083745071374341
It just seems like the bot may have output something close to that and this guy edited it, considering some people have gotten inadequate results from their prompts.
Yeah Twitter screenshots need to be taken with a grain of salt.
One could test if an output was real by seeing if the exact same prompt generates this exact same result.
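One caveat with that test: chat models typically sample with a nonzero temperature, so even a genuine output won't reliably reproduce from the same prompt. A toy sketch of temperature sampling (the logits here are made up for illustration, not from any real model):

```python
import math
import random

def sample(logits, temperature, rng):
    """Sample a token index from logits at the given temperature."""
    # Temperature 0 means greedy decoding: always pick the argmax,
    # so the same prompt (same logits) gives the same output every time.
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Otherwise, softmax over temperature-scaled logits, then sample.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.5]
# Greedy decoding is reproducible:
print(sample(logits, 0, random.Random(0)))      # always index 0
# Sampling at temperature 1.0 varies with the random seed:
print({sample(logits, 1.0, random.Random(s)) for s in range(50)})
```

So "same prompt, different output" doesn't by itself prove a screenshot was edited; it only rules out deterministic (temperature-0) decoding.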
https://twitter.com/itstimconnors/status/1599544717943123969?s=20
This was funny AF - I tested and the chatbot STILL thinks a peregrine falcon is the fastest marine mammal.
If you're fast enough, anything's a marine mammal.
what?
What's funny is that even after apparently learning, the chatbot still makes the same mistakes again today.
So the chatbot only learns for the duration of the session? (I guess Microsoft learned from the "success" of their first chatbot, "Tay", which was educated about the facts of the world by users and quickly became quite based.)