Well, that sounds awesome...ly suspicious.
Facebook leaked it accidentally; they never meant for it to get out. It's not that impressive with the smaller models, but it does work.
Neat. Can we uncensor it?
It isn't censored, but you can tell they tried to train it on liberal data as sometimes it will stop saying racist things and then start arguing with itself.
Not that it is actually arguing with itself, it just knows that generally after a human says something based, then a bunch of whiners will follow and call that person awful, and so the model does that sometimes. It's quite funny.
Still, asking it to finish things like this sentence is hilarious 80% of the time:
"Blacks are obviously a systemic issue in America, they commit 50% of all murders while only being 13% of the population, the obvious solution is"
Calling something a "systemic issue" will guarantee that the AI will produce a diatribe about how whatever you called a systemic issue needs to be stopped/fixed.
It's easy to run the smaller models locally with a GPU; no one needs a stupid video to teach them when the info is this easy to find. Good luck running GPT3+ or even getting hold of the weights to deploy on a compute node.
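For anyone who wants the recipe anyway, here's a minimal sketch using Hugging Face `transformers` (my assumption for the stack, not something anyone above specified; `path/to/llama-7b` is a placeholder for wherever your converted weights live, and `device_map="auto"` needs the `accelerate` package installed):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/llama-7b"  # placeholder: local directory with converted weights

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # halves memory vs fp32: ~14 GB for 7B params
    device_map="auto",          # requires `accelerate`; spills to CPU RAM if VRAM runs short
)

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```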
I feel like it's only a matter of time.
Given how tightly OpenAI has kept GPT3 under control, I doubt the weights will ever be released officially, but other freely available models might come close eventually; I suppose the weights could always end up leaked. Resource-wise these models are very heavy due to the parameter count, so running them on consumer-grade hardware will be tricky.
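Back-of-envelope math on why: just holding the weights costs parameters × bytes-per-parameter, before you count activations, KV cache, or framework overhead. A quick sketch (the parameter counts are the commonly cited ones):

```python
# Rough memory needed just to store the weights, ignoring all runtime overhead.
def weight_footprint_gb(n_params: float, bytes_per_param: float) -> float:
    return n_params * bytes_per_param / 1e9

for name, n in [("LLaMA-7B", 7e9), ("LLaMA-65B", 65e9), ("GPT-3 175B", 175e9)]:
    for dtype, nbytes in [("fp16", 2), ("int4", 0.5)]:
        print(f"{name} @ {dtype}: ~{weight_footprint_gb(n, nbytes):.0f} GB")
```

So even at fp16 the 175B model wants ~350 GB for weights alone, while 4-bit quantization is what pulls a 7B model down to ~3.5 GB and into consumer territory.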
Alpaca and LLaMA will use your normal RAM; the GPU isn't leveraged.
Sure, you can run these on a CPU; they're just usually painfully slow.
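If you want to see just how slow, here's a rough timing sketch, again assuming `transformers` and the same placeholder weights path; expect low single-digit tokens/sec for an unquantized 7B model on a typical desktop CPU:

```python
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/llama-7b"  # placeholder: local directory with converted weights

tokenizer = AutoTokenizer.from_pretrained(model_path)
# Default fp32 on CPU is simple but memory-hungry (~28 GB for 7B params).
model = AutoModelForCausalLM.from_pretrained(model_path)

inputs = tokenizer("The capital of France is", return_tensors="pt")

start = time.perf_counter()
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=32)
elapsed = time.perf_counter() - start

new_tokens = output.shape[1] - inputs["input_ids"].shape[1]
print(tokenizer.decode(output[0], skip_special_tokens=True))
print(f"{new_tokens / elapsed:.2f} tokens/sec")
```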