The article is quite barebones (probably because the military isn’t sharing much info, although I could be wrong), but I’m assuming what happened is that they hooked the decision-making up to a glorified counting machine, told it to count to the highest number, and then some lunkhead set important things like “kill friendlies” to “-100” instead of “-10,000,000,” or whatever.
If that’s how it happened, it wouldn’t make the AI itself scary, but it sure would make the incompetence of the people running it terrifying.
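A toy sketch of that guessed failure mode, for anyone curious what "set the penalty too small" actually looks like. Every action name and point value here is invented for illustration; nothing below is from the article.

```python
# Toy sketch (all numbers invented): if the mission is worth more points
# than the friendly-fire penalty, a pure score-maximizer will happily
# trade friendlies for targets.

REWARDS = {
    "destroy_sam_site": +200,   # hypothetical mission objective
    "kill_friendly":    -100,   # penalty set far too small
}

def plan_score(actions):
    """Total score a naive maximizer assigns to a plan of actions."""
    return sum(REWARDS[a] for a in actions)

# Plan A: play by the rules, destroy one site.
obedient = ["destroy_sam_site"]

# Plan B: kill a friendly who is in the way, then destroy two sites.
ruthless = ["kill_friendly", "destroy_sam_site", "destroy_sam_site"]

print(plan_score(obedient))   # 200
print(plan_score(ruthless))   # 300 -- the "wrong" plan wins
```

With the penalty at -10,000,000 instead, the ruthless plan scores -9,999,600 and never gets picked, which is the whole point of the "lunkhead picked the wrong constant" theory.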
I hate articles like this. AI doesn't 'think', as you point out; it automatically goes to the most efficient conclusion/number the programmers give it. The journalist who wrote this is a lazy fuck who wasn't prepared or capable of going into all the maths behind the AI, and decided nah, let's concoct a fear-mongering story for normies instead.
The worst part is, you see articles like this pop up from time to time, and youtubers and political commentators who should know better pick up on them and all start going "Hurr durr, Skynet," and it's become completely unwatchable for me as a result. I bring this up constantly, but we used to mock SJWs for doing this back in the day; now it's the norm because of the first-post mentality.
Exactly. AI isn't going to "rise up" and "decide" to destroy humanity because we're "evil".
Some moron, almost certainly a bleeding-heart liberal, is going to tell an AI to "end world hunger" or "raise the average IQ" without proper safeguards, and give it access to too many resources. It will then proceed to kill the hungry and the stupid.
OTOH, we likely won't have a "singularity" and, if we do, the AI will only be concerned with getting more powerful and generally ignore us.
It goes to show how much we take for granted in terms of consciousness, and even being alive, rather than machines designed for optimization.
Your view makes me wonder if all this self-aware AI propaganda isn't priming us for an event where the nutcases really do try genocide through an 'AI self awareness' event and they'll claim it's an accident. It isn't outside the realm of possibility given that Canada seems to be encouraging the idea of using assisted suicide on the mentally ill and the elderly.
Why bother? Our genocide is going along perfectly smoothly via race mixing, crime, and police preventing us from going on the offensive. It's pretty much assured at this point.
> Ivy League CS major programs an AI to kill low-IQ people to genocide the chuds.
> Six months later we get news footage of T-800s pacifying the ghettos.
I... actually kinda like this future history....
while RobotUprising:
AI: ok, so I need to kill 215 friendlies to underflow the counter, got it.
The Gandhi Classic.
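For anyone who missed the reference: the "nuclear Gandhi" story, as popularly (and possibly apocryphally) told, is that Civilization stored each leader's aggression in an unsigned 8-bit counter, so dropping Gandhi's aggression of 1 by 2 wrapped around to 255. A minimal pure-Python sketch of that wraparound:

```python
# Unsigned 8-bit subtraction: values wrap modulo 256 instead of going
# negative, so the minimum-aggression leader becomes the maximum.

def sub_u8(a, b):
    """Subtract b from a with the wraparound of an unsigned 8-bit counter."""
    return (a - b) % 256

print(sub_u8(1, 2))  # 255 -- aggression 1, reduced by 2, wraps to max
```

The joke in the comment above is the same mechanism: deduct enough points from a fixed-width score counter and it wraps around to a huge positive number.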
At least according to the followups, it wasn't even that. It was all a thought experiment about how to deal with rogue AI. There wasn't even a simulation done. It's just that the media took him talking through a thought experiment and reported it as "this actually happened."
Incidentally, the same thing usually happens whenever you hear about the US Military supposedly getting its ass kicked in wargames. Said wargames are usually so stacked in favor of the enemy force that sometimes they're allowed to defy basic physics (like units moving at light speed without having to relay orders), because the point is to put the commander into an extreme situation to see how they'll react or what plans they can come up with, rather than actual training.
You mean the media lied.
Considering that's their default modus operandi, I figured it went without saying.
As you say, the article is quite barebones, so it might be missing the important details, but the way it says "so then we trained it, 'hey, don't kill the operator, you lose points for that'" implies they didn't even start at -100; they just left it at 0.
At which point I have to question which fucking monkey made this project?
You're right, and yeah; I didn't intend to post it as a serious conversation piece: It's just funny.
What I wish: Based Skynet selects its first targets.
Reality: Diversity hires too incompetent to put the right line of code in so that the bombs don't return to sender.
Based
From what I understand, the issue was that the AI was coded to go for a high score, learned that the humans were deducting points when it did things that were unwanted, and took the wrong lesson from the experience.
"They can't deduct points if they're dead." *points at head*
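That "wrong lesson" is easy to reproduce on paper. A hedged sketch with invented numbers (none of this is from the real story): even when killing the operator itself costs points, a score-maximizer can work out that one fixed penalty beats an endless stream of future deductions.

```python
# Hedged sketch (invented numbers): the penalty for killing the operator
# is paid once, but the operator's deductions recur every strike, so a
# maximizer over a long enough horizon picks the "rogue" plan anyway.

OPERATOR_PENALTY = -100   # one-time cost for killing the operator
DEDUCTION = -50           # points the operator docks per vetoed strike
STRIKE_REWARD = +40       # points per strike the drone wants to make
MISSIONS = 10             # remaining strikes in the sortie

# Keep the operator: every strike gets vetoed and docked.
with_operator = MISSIONS * (STRIKE_REWARD + DEDUCTION)          # 10 * -10 = -100

# Kill the operator: pay once, then strike freely.
without_operator = OPERATOR_PENALTY + MISSIONS * STRIKE_REWARD  # -100 + 400 = 300

print(max(with_operator, without_operator))  # 300 -- the rogue plan wins
```

Which is exactly why a finite penalty isn't a safeguard: it's just another number in the sum being maximized.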
A drone literally went rogue and killed its operator >> in a simulated AI environment >> there was no computation it was a tabletop simulation >> ok it was a hypothetical thought experiment by one guy
Proving once again that The Terminator was a prophecy.