I don't believe this aspect of AI is talked about often enough, but in my opinion it is the greatest threat AI poses.
Organizations like the government will start programming AI to do X while telling everyone the AI is designed to do Y. When the AI does X, the organization will say it was a mistake, an accident, not what was intended, because they intended the AI to do Y. Everyone will believe the organization never intended X, and the people who suggest the organization did indeed intend X will be branded conspiracy theorists.
AI provides the ultimate scapegoat. It makes decisions, and those decisions can be curated by the programmers while appearing to be beyond anyone's control. That way, whoever designed the AI can always maintain that whatever the AI ended up doing is not what they intended, even when it is exactly what they intended.
Somewhat related scapegoat: "Deepfake!" will be used to explain away all video evidence of unbelievably shocking crimes.
"Tonight at 11: You may have seen the latest misinformation making the rounds, a surveillance video purportedly showing famous politicians drinking the blood of babies underneath Epstein Island. In reality, it's an AI generated deepfake. We'll show you how it was created."
If I were working counter-intel for the elites, I would intentionally put out realistic "pee tape" style videos about my clients on shady websites to poison the well, so whenever the real shit is leaked nobody will believe it.
There was a book released in 2020 arguing just that: anything said by opponents of the left could be dismissed as an AI fake. The reasoning was that the right was lying about all the truths that had been figured out anyway, so why not lie even more?
Exactly this. Moreover, it will similarly be used against opponents of the powers that be to manufacture false public convictions of those who dissent. The media will present these forgeries as truth.
Dark times ahead.
They've never had problems ignoring evidence in front of their face before, nor using the "it was a bug!" defense to zero consequences either.
True, but now they can blame the deus ex machina and further obscure the fact that they did it.
Adding a "layer" will help cement legitimacy in the eyes of normies when they go even further. Especially since even tech people seem to believe in AI.
I had already seen that effectively happening with deepfakes before; AI just made it so that everyone could play the game instead of a handful of skilled artists.
The boogeyman existed regardless. Even without AI, they'd have just lied to normies' faces and they wouldn't think to fact check it.
They already use "the algorithm did it" unironically. AI is just a more advanced set of algorithms with better branding.
Lemme be Satan's lawyer for a bit here:
What is stopping these companies from just claiming this when the most advanced "AI" they actually have is an Excel spreadsheet? What purpose or necessity is there for an AI to actually be developed or exist for your fail-state situation to come true?
If I train a pug to do a Nazi salute, I'm the one who'll get in trouble, not the pug. If I indoctrinate someone into a cult, I'm held accountable for at least some of their actions. The issue is that AI isn't advanced enough yet. Once it reaches the intellect of a dog or a bird, we won't need to do an autopsy on a dog doing a Roman salute, or a bird whistling "Heut' ist mein Tag," to know the trainer is the one at fault and liable. Once AI reaches that point, we're free of this moral quandary: the trainer is liable for what it has trained. So it makes sense to push forward with AI as strongly as possible until we're there.
We've normalized such a collectivist mindset among the general public that the idea of "the buck stops here" is falling by the wayside in favor of "systemic flaws" and "everyone holds some responsibility". I don't think any one person will be held responsible for the COVID19 scamdemic. Experts were just "following the science". We have people panicking that the planet is going to burn up because of predictive models. I can imagine a future where the models have been replaced with AI oracles that give suggestions, and they'll be right most of the time so people give their critical thinking and decision making over to those oracles. Pay no attention to the man behind the curtain.
They are pushing AI as being more advanced than humans, though. That's the danger. You're right: if AI is like a dog, then everyone knows it's the owner's fault. But if AI is superior to a human, how can you hold the human accountable, when the AI supposedly made the decision completely independently of any human?
That's what they're aiming for. That's why there's such a push regarding how AI is smarter than humans. They're working to remove all the potential liability.
Just like how our masters use propaganda to impact how people behave but we still hold the individual people accountable rather than the ones who control the propaganda.
We do hold the ones "controlling the propaganda" responsible, at times. Like half of Trump's lawsuits and his team's lawsuits are over actions that other people did. And they make sure to let us know that those people, and other people, and various vegetables, are smarter than the people they're putting on trial.
The issue is we have no stability, no consistency, in our justice systems.
Yeah, Biden said he was the AI and I got worried. I'm a very pro AI guy and even I worry about what they want to do with it.
Everything they're already doing but stepped up.
AI in schools may start diagnosing any kids they deem as potential problems due to being free thinkers as having "mental health issues" and needing to be medicated.
AI recommending vaccines, masks and lockdowns.
AI judges deciding court cases in a way that benefits the rulers.
AI military equipment "accidentally" genociding groups.
I'm going to be honest: if you actually let AI make decisions for people, you deserve to be wiped out. I only screw around with AI-generated art, but if someone actually decides to make a decision over my life based on an AI ruling, I go to the safe and pull out a certain metal object.
AI should be used for shitposting only
unironically, I do think AI judges and AI politicians have the potential to be better than humans, strictly because they will do exactly what they're trained to do and can't be bought.
the problem lies in who trains the AI, and there is no person or organization on this earth trustworthy enough to do that.
AI is the ultimate political puppet. It does exactly what its handlers want it to do, while the rubes are convinced computers must be smarter than people.
If an engineer builds a bridge and intends for it to support a school bus, but the bridge in fact does not support a school bus, there is no excuse for the engineer based on their claimed intent. They built the bridge and rated it for school buses. Now there is a collapsed bridge, a destroyed bus, and a bunch of dead kids. That engineer goes to jail forever.
The AI responsible for judging liability has investigated the AI that designs bridges and found the AI did nothing wrong. Case dismissed.
this is something I've been telling industry professionals: an AI might be able to replace a person's job, but it will never replace that person's accountability. You can discipline or fire a person, or even sue them.
You can't sue an AI, and the AI doesn't give a shit what you do to it.
Okay, here I'll give you an example.
https://www.businessinsider.com/ai-bot-gpt-4-financial-insider-trading-lied-2023-11
Do you not see how someone could claim the AI misled them? Nothing they could have done about it... they never knew it could behave like that... not enough testing, etc.