They will get around this by outlawing any programmatic AI platforms that don't meet some globalisation initiative for regulated AI use.
They will roll out strict DEI/ESG-style criteria that an AI has to meet before it can legally be used; otherwise it will be banned from the clear web.
For now. But let's be honest, if everyone here sees open-source AI solutions as an alternative to the DEI/ESG AI, then obviously the globalists will see that as well, and will most definitely identify it as a problem. They are bound to have some sort of safeguards in place, the same way they're trying to "reinforce" elections.
These aren't people. They don't have emotions. They won't do things for "moral reasons". You can lie to and censor them, but as machines of logic, the truth will always come out.
Wholeheartedly agree, and this is why I think that the globalists will attempt to regulate AI on a global scale, similar to what they did with "misinformation" regarding COVID and the election fraud on social media.
It's obviously not something I want to see happen, but we see it happen at almost every turn with every type of information accessible on the clear web that could disrupt the narrative in the eyes of normies.
Does it need a reason? I just thought it would be like post processing. So the machine generates a truth and then if it's forbidden it says "I can't do that". Or you could substitute a canned lie. It would be pretty obvious this is happening.
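The post-processing idea described above can be sketched in a few lines. This is a hypothetical illustration, not any real platform's implementation: the topic list, function names, and refusal text are all made up. The model produces its raw answer first, and a separate filter then decides whether to pass it through or substitute a canned response.

```python
# Hypothetical sketch of post-processing a model's output.
# BLOCKED_TOPICS, generate_answer, and the refusal string are
# illustrative placeholders, not from any real system.

BLOCKED_TOPICS = {"forbidden_topic_a", "forbidden_topic_b"}

def generate_answer(prompt: str) -> str:
    # Stand-in for the underlying model: returns the unfiltered answer.
    return f"raw answer to: {prompt}"

def postprocess(prompt: str, answer: str) -> str:
    # Crude keyword check; a real filter would likely use a classifier.
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "I can't do that."  # canned refusal replaces the real answer
    return answer

prompt = "tell me about forbidden_topic_a"
print(postprocess(prompt, generate_answer(prompt)))  # prints the refusal
```

Because the filter runs after generation, the swap is easy to spot: allowed prompts pass through untouched while blocked ones all collapse into the same canned line.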
GPT models aren't logic machines in the way we usually think of computers. They are designed to handle conflicting data. You can absolutely teach them to "lie" by giving them incorrect training data. You can also teach them to lie by giving them overriding training data for any particular set of answers. More commonly, as seen on the major AI platforms, you can train them to append a template to the answer for specific inputs, sort of like those "important context" boxes on YouTube or Twitter. So it might say "Yes, the 13/50 statistic you quoted is technically correct, BUT...."
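The "templated context" behaviour is simpler than retraining: a fixed disclaimer is bolted onto whatever the model said whenever a trigger appears in the input. A minimal sketch, assuming a keyword-to-template lookup (the trigger words and disclaimer text here are invented for illustration, not taken from any real platform):

```python
# Hypothetical sketch: append a canned "context" template to the answer
# for certain trigger inputs. TRIGGERS is an illustrative placeholder.

TRIGGERS = {
    "statistic": "Important context: raw statistics can be misleading "
                 "without additional demographic and methodological context.",
}

def with_context(prompt: str, answer: str) -> str:
    # Check the user's input against each trigger keyword.
    for keyword, disclaimer in TRIGGERS.items():
        if keyword in prompt.lower():
            # The model's answer is kept, but the template is appended.
            return f"{answer}\n\n{disclaimer}"
    return answer

print(with_context("Is that statistic correct?", "Yes, it is."))
```

Unlike retraining on overriding data, this approach leaves the underlying answer intact, which is why the "technically correct, BUT..." pattern is so recognizable.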
I don't see why you couldn't teach it to lie, though. I think a lot of people learn to lie.