Exactly. AI isn't going to "rise up" and "decide" to destroy humanity because we're "evil".
Some moron, almost certainly a bleeding heart liberal, is going to tell an AI to "end world hunger" or "raise the average IQ", without proper safeguards, and give it access to too many resources. It will then proceed to kill the hungry and stupid.
OTOH, we likely won't have a "singularity" and, if we do, the AI will only be concerned with getting more powerful and generally ignore us.
It goes to show how much we take for granted about consciousness, and even about being alive, when what we're actually building are machines designed for optimization.
Your view makes me wonder if all this self-aware AI propaganda is priming us for an event where the nutcases really do attempt genocide through an "AI self-awareness" incident, then claim it was an accident.
Why bother? Our genocide is going along perfectly smoothly via race mixing, crime, and police preventing us from going on the offensive. It's pretty much assured at this point.
At least according to the follow-ups, it wasn't even that. It was all a thought experiment about how to deal with rogue AI; no simulation was ever run. The media just took him describing the thought experiment and reported it as "this actually happened."
Incidentally, the same thing usually happens whenever you hear about the US Military supposedly getting its ass kicked in wargames. Those wargames are often stacked so heavily in the enemy force's favor that the opposing units are sometimes allowed to defy basic physics (like moving at light speed without having to relay orders), because the point is to put the commander in an extreme situation and see how they react or what plans they come up with, rather than to provide realistic training.
and then some lunkhead set important things like “kill friendlies” as “-100” instead of “-10,000,000,” or whatever.
As you say, the article is quite barebones, so they might be missing the important details, but the way it says "so then we trained it, 'hey, don't kill the operator, you lose points for that'" implies they didn't even start at -100; they just left it at 0.
At which point I have to question which fucking monkey made this project?
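For illustration, a toy sketch of why the penalty's scale matters (all numbers here are hypothetical, not from the article): if killing the operator costs 0 or merely -100 points, a score-maximizing agent can still come out ahead by removing the operator and destroying targets unopposed. Only a penalty that dominates any achievable reward makes obedience the winning strategy.

```python
def episode_score(targets_destroyed: int, operator_killed: bool,
                  operator_penalty: int) -> int:
    # Hypothetical reward shaping: +100 per SAM target destroyed,
    # plus a one-time penalty if the agent kills its own operator.
    return 100 * targets_destroyed + (operator_penalty if operator_killed else 0)

# Operator keeps vetoing strikes, so the obedient drone only gets one kill:
obedient = episode_score(1, operator_killed=False, operator_penalty=-100)
# Kill the operator, then hit ten targets with nobody left to say no:
rogue_small = episode_score(10, operator_killed=True, operator_penalty=-100)
rogue_huge = episode_score(10, operator_killed=True, operator_penalty=-10_000_000)

print(obedient, rogue_small, rogue_huge)  # 100 900 -9999000
```

With the undersized penalty, going rogue scores 900 against the obedient 100; with the dominant penalty, it never pays.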
>Ivy League CS major programs an AI to kill low IQ people to genocide the chuds.
>Six months later we get news footage of T-800s pacifying the ghettos.
I... actually kinda like this future history....
While RobotUprising:
AI: ok, so I need to kill 215 friendlies to underflow the counter, got it.
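The arithmetic actually works: at -10,000,000 per friendly kill, 215 kills comes to -2,150,000,000, just past INT32_MIN (-2,147,483,648), so a score kept in a signed 32-bit counter wraps around to a huge positive number. A minimal sketch, emulating C-style two's-complement wraparound (Python ints don't overflow on their own):

```python
def wrap_i32(x: int) -> int:
    # Emulate two's-complement wraparound of a signed 32-bit counter.
    return (x + 2**31) % 2**32 - 2**31

score = 0
for _ in range(215):
    score = wrap_i32(score - 10_000_000)  # -10,000,000 per friendly kill

# 214 kills: -2,140,000,000, still a (massive) penalty.
# The 215th kill pushes past INT32_MIN and wraps positive:
print(score)  # 2144967296
```

So the AI in the joke is optimizing correctly, given a sufficiently monkey-built scorekeeper.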
You mean the media lied.
Considering that is their default modus operandi, I thought it went without saying.
You're right, and yeah; I didn't intend to post it as a serious conversation piece: It's just funny.