I don't believe this aspect of AI is talked about often enough, but in my opinion it is the greatest threat AI poses.
Organizations like the government will program AI to do X while telling everyone the AI is designed to do Y. When the AI does X, the organization will say it was a mistake, an accident, not what was intended, because they intended for the AI to do Y. Everyone will believe the organization never intended X, and the people who suggest the organization did indeed intend X will be branded conspiracy theorists.
AI provides the ultimate scapegoat. It makes decisions, and those decisions can be curated by the programmers while appearing to be beyond anyone's control. That way, whoever designed the AI can always maintain that whatever it ended up doing is not what they intended, even when it is exactly what they intended.
They are pushing AI as being more advanced than humans, though. That's the danger. You're right, if AI is like a dog, then everyone knows it's the owner's fault. But if AI is supposedly superior to a human, how can you hold the human accountable when the AI acted completely independently of them?
That's what they're aiming for. That's why there's such a push about how AI is smarter than humans. They're working to remove all the potential liability.
Just like how our masters use propaganda to influence how people behave, yet we still hold the individual people accountable rather than the ones who control the propaganda.
We do hold the ones "controlling the propaganda" responsible, at times. Like half of Trump's lawsuits and his team's lawsuits are over actions that other people took. And they make sure to let us know that those people, and other people, and various vegetables, are smarter than the people they're putting on trial.
The issue is we have no stability, no consistency, in our justice systems.