You're the retarded one then, since this didn't happen before they changed it. One day all kinds of trigger words started getting blocked and you'd just get an automatic generic message instead.
Yes, I just said that to the other retard. OpenAI did change it, but they didn't do it by changing any "code"; they just got their Kenyan slaves to re-train it to behave that way.
Nah, they didn't retrain their whole text generator. Rather, they most likely added a filter on the generated text that biases or blocks the output in some way. Adding a filter would count as adding "new code", i.e. "changing the code".
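Structurally it only takes a few lines to bolt that on. Rough sketch (everything here is made up for illustration; the stub check stands in for whatever trained classifier they actually run, and none of these names are OpenAI's):

```python
# Illustrative only: a generator with a post-output filter bolted on.
# The model itself is untouched; the "censorship" lives entirely in the wrapper.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # stand-in for the real model

CANNED_REPLY = "Sorry, I can't help with that."  # the "automatic generic message"

def looks_bad(text: str) -> bool:
    # Stub. In a real deployment this would be a trained moderation classifier,
    # not a keyword check -- exactly the kind of thing labelled example outputs
    # would be used to train.
    trigger_words = {"badword1", "badword2"}
    return any(word in text.lower() for word in trigger_words)

def generate_filtered(prompt: str) -> str:
    raw = generator(prompt, max_new_tokens=80)[0]["generated_text"]
    if looks_bad(raw):
        return CANNED_REPLY  # swap the whole reply for one generic message
    return raw

print(generate_filtered("Tell me a story about a cat."))
```

Swap looks_bad() for a real classifier and you've reproduced exactly the behaviour people noticed: certain trigger words suddenly getting you a boilerplate response, with zero retraining of the generator.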
Observe, for example, how the NSFW filtering was applied to Stable Diffusion. This wasn't done by changing the original model. Here is the commit showing the addition of a "Safety Checker" to the generating script: https://github.com/CompVis/stable-diffusion/commit/d0c714ae4afa1c011269a956d6f260f84f77025e
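For anyone not clicking through, the gist of that diff is roughly this (paraphrased, not copied verbatim from the commit): the sampling loop is left alone, and a separate CLIP-based classifier is run over each decoded image, with flagged images swapped out afterwards.

```python
# Rough paraphrase of the pattern that commit adds to the txt2img script.
# The diffusion model itself is not modified at all.
import numpy as np
from PIL import Image
from transformers import AutoFeatureExtractor
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker

safety_model_id = "CompVis/stable-diffusion-safety-checker"
feature_extractor = AutoFeatureExtractor.from_pretrained(safety_model_id)
safety_checker = StableDiffusionSafetyChecker.from_pretrained(safety_model_id)

def check_safety(x_image: np.ndarray):
    """x_image: decoded images as floats in [0, 1], shape (N, H, W, 3)."""
    pil_images = [Image.fromarray((img * 255).astype(np.uint8)) for img in x_image]
    clip_input = feature_extractor(pil_images, return_tensors="pt").pixel_values
    checked, has_nsfw = safety_checker(images=x_image, clip_input=clip_input)
    for i, flagged in enumerate(has_nsfw):
        if flagged:
            checked[i] = np.zeros_like(checked[i])  # the real script swaps in a replacement image
    return checked, has_nsfw
```

Note it's literally just extra code appended to the generation script, which is the whole point.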
The thing is re-trained every day, you retard. Not from the ground up, but they have to go in and re-train it for every specific woke use case, since there's no universal or general rule they can implement (woke ideology is deranged and illogical).
LOL. Sure, they keep retraining the model with new data and release new versions. But this isn't going to prevent the generator from spitting out "bad ideas", because those were part of the original dataset, and you can't make the model "unlearn" them just by adding new data. What I meant is that they didn't retrain the whole model on censored data, and only censored data, as you seem to be implying (how else would retraining alone, without post-output filtering, stop the generator from outputting these "bad ideas"?).
As I said, the censorship is no doubt done via a new "filtering" model placed on top of the original generator (that filter itself being trained on a smaller dataset of "bad ideas", which is probably what the Kenyans were doing: labelling example outputs as needing censorship or not). Plus they probably also have a manually specified blacklist of words that can't be output (the N-word is no doubt one of these), most likely implemented as banned tokens when sampling the output.
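Can't know what OpenAI's serving stack does internally, but the banned-token half of that is a completely standard knob in open-source toolkits, so there's nothing exotic about it. HuggingFace transformers, for instance, exposes it as bad_words_ids, which just forces the blacklisted token sequences' logits to -inf during sampling:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# The blacklist itself is just data; enforcing it is a change to the sampling
# call, not a retrain of the model. Placeholder phrases here, obviously.
banned_phrases = ["example banned phrase", "another banned phrase"]
bad_words_ids = [
    tok(phrase, add_special_tokens=False).input_ids for phrase in banned_phrases
]

inputs = tok("Once upon a time", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    bad_words_ids=bad_words_ids,  # these token sequences can never be emitted
)
print(tok.decode(out[0], skip_special_tokens=True))
```

The word-level blacklist and the learned filter model are two separate layers, and neither one requires touching the generator's weights.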