I've been reading some OpenAI writing prompts on Elon Musk's twitter and I have doubts that a computer actually generated them. The one that made me wonder is some story about Elon adding a "cannoli button" function. https://twitter.com/mishaboar/status/1599083745071374341
It just seems like the bot may have output something close to that and this guy edited it, considering some people have gotten inadequate results from their own prompts.
Yeah Twitter screenshots need to be taken with a grain of salt.
One could test whether an output was real by seeing if the exact same prompt generates the exact same result.
https://twitter.com/itstimconnors/status/1599544717943123969?s=20
This was funny AF - I tested and the chatbot STILL thinks a peregrine falcon is the fastest marine mammal.
If you're fast enough, anything's a marine mammal.
what?
What's funny is that even after apparently learning, the chatbot still makes the same mistakes today.
So the chatbot only learns for the duration of the session? (I guess Microsoft learned from the "success" of their first chatbot, Tay, which was educated about the facts of the world and quickly became quite based.)
Every time I've ever used something like this it struggles to maintain any semblance of coherence and requires a lot of user intervention to work. Usually you'll have it generate a sentence over and over until it fits, then move onto the next sentence. It's technically "written by an AI", but without a human editor it'd be garbage 95% of the time.
We are at the starting line of AI and AI research.
AI doesn't have to be smarter than the average person to be useful. Imagine an army of 100,000 somewhat stupid workers executing your will, 24/7.
The best example of useful dumb AIs would be the Faxes from Cold As Ice by Sheffield. Yeah, the cheap lower level Faxes are pretty much chatbots... but they're chatbots that can manage your appointments, keep an eye on your house, make calls for you...