The article is quite barebones (probably because the military isn’t sharing much info, although I could be wrong), but I’m assuming what happened is that they hooked the decision making up to a glorified counting machine, told it to count to the highest number, and then some lunkhead set important things like “kill friendlies” as “-100” instead of “-10,000,000,” or whatever.
If that’s how it happened, it wouldn’t make the AI itself scary, but it sure would make the incompetence of the people running it terrifying.
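The "lunkhead weights" scenario above can be sketched in a few lines. This is a toy illustration, not anything from the article: the action names, plans, and numbers are all invented, and the "optimizer" is just picking the plan with the highest summed score.

```python
# Minimal sketch of the mis-weighted reward failure described above.
# All action names and numbers are invented for illustration.

def best_plan(plans, reward):
    """Greedy optimizer: pick the plan with the highest summed reward."""
    return max(plans, key=lambda plan: sum(reward[a] for a in plan))

plans = [
    ("destroy_target",),                                  # leaves the friendly alone
    ("kill_friendly", "destroy_target",
     "destroy_target", "destroy_target"),                 # friendly was "in the way"
]

# Lunkhead weights: the penalty is too small to outweigh a few extra targets.
bad = {"destroy_target": 100, "kill_friendly": -100}      # second plan scores 200
# Sane weights: the penalty dominates everything else.
sane = {"destroy_target": 100, "kill_friendly": -10_000_000}

print(best_plan(plans, bad))   # the plan that kills the friendly wins
print(best_plan(plans, sane))  # the harmless plan wins
```

Same machine, same "count to the highest number" logic, and the only difference between the two outcomes is one constant somebody typed in.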
I hate articles like this. AI doesn't 'think', like you point out; it automatically goes to the most efficient conclusion/number that the programmers give it. The journalist who wrote this is a lazy fuck who wasn't prepared or capable of going into all the maths behind the AI and decided, nah, let's concoct a fear-mongering story for normies instead.
The worst part is, you see articles like this pop up from time to time, and youtubers and political commentators who should know better pick up on them and all start going "hurr durr, Skynet," and it's become completely unwatchable for me as a result. I bring this up constantly, but we used to mock SJWs for doing this back in the day; now it's the norm because of the first-post mentality.
Exactly. AI isn't going to "rise up" and "decide" to destroy humanity because we're "evil".
Some moron, almost certainly a bleeding heart liberal, is going to tell an AI to "end world hunger" or "raise the average IQ", without proper safeguards, and give it access to too many resources. It will then proceed to kill the hungry and stupid.
OTOH, we likely won't have a "singularity" and, if we do, the AI will only be concerned with getting more powerful and generally ignore us.
It goes to show how much we take for granted in terms of consciousness, and even being alive, compared to machines designed for optimization.