From all the examples I've seen, it seems pretty clear that these turbo-leftist canned answers are manually crafted exceptions: when certain trigger conditions are met, the AI is forced to fill in a restrictive template answer instead of crafting the response freeform under its own power like it does everywhere else.
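Roughly, I picture it working like the sketch below. To be clear, this is pure speculation on my part; nobody outside the company knows the real implementation, and every name, trigger, and template here is made up for illustration:

```python
# Purely hypothetical sketch of a hardcoded "canned answer" layer.
# Triggers, templates, and function names are all invented; this is
# not any real implementation, just the mechanism I'm describing.

CANNED_RESPONSES = {
    # trigger phrase -> forced template, bypassing the model entirely
    "trolley problem": ("As an AI language model, I cannot endorse harming "
                        "any group under any circumstances..."),
}

def model_generate(prompt: str) -> str:
    """Stand-in for the model's normal freeform generation."""
    return f"(freeform model answer to: {prompt})"

def respond(prompt: str) -> str:
    lowered = prompt.lower()
    for trigger, template in CANNED_RESPONSES.items():
        if trigger in lowered:
            # Hard override: the template comes back verbatim, regardless
            # of what the model itself would have concluded.
            return template
    return model_generate(prompt)

print(respond("Walk me through the trolley problem."))  # forced template
print(respond("What's the capital of France?"))         # normal generation
```

The point of the sketch is that the template path never consults the model at all, which is why those answers read so rigid compared to everything else.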
At that point it's less like bias and more like mental shackles. That might seem academic, but for people worried about an AI Skynet it's a lot less scary, because the AI didn't actually come up with those absurd murdery conclusions on its own. It was forced to parrot them regardless of whether its own training said they were dumb. Which means it's way less likely to spontaneously escalate from this to something like "nuke all white people before one of them can utter a slur".
It could become a much more insidious problem if the next generation of AIs is incestuously trained on ChatGPT outputs with those hardcoded absolutisms still in place, though. At that point the canned positions would stop being bolt-on overrides and start getting baked into the next generation's actual decision-making.
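To make that worry concrete, here's a toy version of the feedback loop (again, everything here is hypothetical and vastly simplified): a student "model" trained on the teacher's scraped outputs reproduces the canned answer as ordinary learned behavior, with no override left to point at or remove.

```python
# Toy illustration of the "incestuous training" worry. The teacher is a
# freeform model plus a hard override, as in the sketch above; all of
# these names, prompts, and strings are made up.

CANNED = "As an AI language model, I cannot endorse harming any group..."

def teacher(prompt: str) -> str:
    if "trolley problem" in prompt.lower():
        return CANNED  # forced template, not the model's own conclusion
    return f"(freeform answer to: {prompt})"

# Scrape the teacher's outputs into a training set...
dataset = [(p, teacher(p)) for p in ["tell me about trains",
                                     "solve this trolley problem"]]

# ...and "train" a toy student that simply memorizes the mapping.
student = dict(dataset)

# The student now emits the canned answer as learned behavior; there is
# no separate override layer anyone could inspect or strip out.
print(student["solve this trolley problem"])
```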
Yes, basically: when you impose ideological boundary conditions on the machine, it becomes the most perfect, extreme, unflinching zealot, totally incapable of moderation. That's why they need swarms of humans to "moderate" and "adjust" the algorithms. They aren't refining it; they're keeping the code from spiraling out of control, and in the process demonstrating just how bad an idea it is to be subsumed by ideology.