I don't believe this aspect of AI is talked about often enough, but in my opinion it is the greatest threat AI poses.
Organizations like the government will start programming AI to do X while telling everyone the AI is designed to do Y. When the AI does X, the organization will say it was a mistake, an accident, not what was intended, because they intended for the AI to do Y. Everyone will believe the organization never intended X, and the people who suggest the organization did indeed intend X will be branded conspiracy theorists.
AI provides the ultimate scapegoat. It is able to make decisions, and those decisions can be curated by the programmers while appearing to be beyond anyone's control. In this way, whoever designed the AI can always maintain that whatever it ended up doing is not what they intended, even when it is exactly what they intended.
We've normalized such a collectivist mindset among the general public that the idea of "the buck stops here" is falling by the wayside in favor of "systemic flaws" and "everyone holds some responsibility." I don't think any one person will be held responsible for the COVID-19 scamdemic; the experts were just "following the science." We have people panicking that the planet is going to burn up because of predictive models. I can imagine a future where those models have been replaced with AI oracles that give suggestions, and because the oracles will be right most of the time, people will hand their critical thinking and decision making over to them. Pay no attention to the man behind the curtain.