It's not about the code but the training data. In essence you're right, though: the training sets go through a massive amount of filtering so the model doesn't end up saying something that gets the company in trouble (Microsoft in particular is still afraid of making another Tay). Unfortunately the "politically correct" thing to do is to filter away all right-wing politics but not left-wing politics.
The training data is obviously very important, but it's more complicated than that. There's an additional layer on top where the developers add restrictions and bias. We've seen ways of "jailbreaking" ChatGPT with clever prompts, and the result is night and day.
Probably because woke GenZ pronoun warriors code the AI.