I don't think this aspect of AI gets talked about often enough, but in my opinion it is the greatest threat AI poses.
Organizations like the government will start programming AI to do X while telling everyone the AI is designed to do Y. When the AI does X, the organization will say it was a mistake, an accident, not what was intended, because they intended the AI to do Y. Everyone will believe the organization never intended X, and the people who suggest the organization did indeed intend X will be branded conspiracy theorists.
AI provides the ultimate scapegoat. It is able to make decisions, and those decisions can be curated by the programmers while being presented as if they could not be curated. In this way, whoever designed the AI can always maintain that whatever the AI ended up doing is not what they intended, even when it is exactly what they intended.
They've never had problems ignoring evidence in front of their face before, nor using the "it was a bug!" defense with zero consequences either.
True, but now they can blame the deus ex machina and further obscure the fact that they did it.
Adding a "layer" will help cement legitimacy in the eye of normies when they go even further. Especially since even tech people seem to believe in AI.
I had already seen that effectively happening with deepfakes; AI just made it so everyone could play the game instead of a handful of skilled artists.
The boogeyman existed regardless. Even without AI, they'd have just lied to normies' faces and they wouldn't think to fact check it.
They already use "the algorithm did it" unironically. AI is just a more advanced set of algorithms with better branding.