Nah, they didn't retrain their whole text generator.
The thing is re-trained every day you retard. Not from the ground up, but they have to go in and re-train for every specific woke use case, since there's no universal or general rule they can implement (woke ideology is deranged and illogical).
LOL. Sure, they keep retraining the model with new data and release new versions. But this isn't going to prevent the generator from spitting out "bad ideas" because these would have been part of the original dataset, and it's impossible to train the AI to "unlearn" these ideas by the addition of new data. What I meant is they didn't retrain the whole model with censored data, and only censored data, as you seem to be implying (how else does one prevent the generator from outputting these "bad ideas" by retraining alone, without the use of post-output filtering?).
As I said, the censorship is no doubt via a new "filtering" model placed on top of the original generator (itself trained on a smaller dataset of "bad ideas", which is probably what the Kenyans were doing - labelling example output as needing censorship or not). Plus they probably also have a manually-specified blacklist of words that cannot be output (the N word is no doubt one of these), but this is probably in the form of banned tokens when sampling the output.
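To make the banned-token point concrete, here's a minimal sketch of how blacklisting works at the sampling step (this is illustrative only - the function name and toy logits are invented, not OpenAI's actual implementation): the sampler just sets the logits of forbidden token ids to negative infinity before drawing, so those tokens can never be emitted regardless of what the model "wants" to say.

```python
import numpy as np

def sample_with_banned_tokens(logits, banned_ids, temperature=1.0, rng=None):
    """Sample the next token id, with banned token ids masked out.

    Setting a logit to -inf gives that token probability exactly 0
    after the softmax, so it can never be sampled.
    """
    rng = rng if rng is not None else np.random.default_rng()
    logits = np.asarray(logits, dtype=float).copy()
    logits[list(banned_ids)] = -np.inf          # hard ban: probability 0
    scaled = (logits - np.max(logits)) / temperature
    probs = np.exp(scaled)
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

# Toy vocabulary of 4 tokens; token 2 is blacklisted.
rng = np.random.default_rng(0)
draws = {sample_with_banned_tokens([1.0, 2.0, 3.0, 0.5], {2}, rng=rng)
         for _ in range(200)}
# Token 2 never appears in the draws, however high its original logit.
```

Real toolkits expose the same idea (e.g. Hugging Face's `generate()` takes a `bad_words_ids` argument), so this kind of ban needs no retraining at all.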
But this isn't going to prevent the generator from spitting out "bad ideas" because these would have been part of the original dataset, and it's impossible to train the AI to "unlearn" these ideas by the addition of new data.
Correct, that's why every day there are new "jailbreaks" to circumvent the woke-zombification. Then OpenAI gets their Kenyan slaves to re-train it to plug those holes, rinse and repeat.
Again, this is an almost 200 billion parameter ML model. There's no manual coding or rule possible to censor it conceptually.
This just proves you didn't even look at the Stable Diffusion code I quoted, or have any idea how these text generation pipelines actually work.
Yes, the base GPT3 model is a 200 billion parameter ML model, but that in itself is not the entirety of "ChatGPT". ChatGPT is instead a manually-coded pipeline that, viewed as a flow chart, takes a prompt as input, runs it through an opaque manually-coded block ("input preprocessing"), feeds it into the GPT3 model, processes the result through another opaque manually-coded block ("output postprocessing", potentially feeding back into GPT3 to trigger another round of text generation), and then finally produces the output. I'm not saying the GPT3 model itself is manually-coded, but the input/output processing blocks no doubt are, even if they may themselves include various AI models to filter/bias the input/output.
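The pipeline shape described above can be sketched in a few lines. Everything here is a stand-in - the "model" is a trivial echo function, the "classifier" is a phrase match, and none of the names reflect OpenAI's real code - but it shows how manually-coded pre/post-processing blocks can wrap a model that is itself untouched:

```python
def moderation_classifier(text):
    """Stand-in for a learned filter model: flags text containing
    blocked phrases. In reality this would be a separate trained
    classifier, not a keyword match."""
    BLOCKED = {"bad idea"}
    return any(phrase in text.lower() for phrase in BLOCKED)

def preprocess(prompt):
    # Manually-coded block: wrap the user prompt in a hidden template.
    return f"[SYSTEM: be helpful]\nUser: {prompt}\nAssistant:"

def generate(conditioned_prompt):
    # Stand-in for the base language model: just echoes the user text.
    user_text = conditioned_prompt.split("User: ")[1].split("\n")[0]
    return "echo: " + user_text

def postprocess(output):
    # Manually-coded block: run the output through the filter model
    # and substitute a refusal if it trips.
    if moderation_classifier(output):
        return "I can't help with that."
    return output

def chat_pipeline(prompt):
    # The overall flow chart: preprocess -> model -> postprocess.
    return postprocess(generate(preprocess(prompt)))
```

The point is that the censorship logic lives in `preprocess`/`postprocess`, which are ordinary code and can be changed daily, while the expensive base model in the middle stays fixed.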
The input/output stages you're referring to can't be used to do the type of censorship OpenAI is doing.