I've actually thought about implementing some kind of deep learning algorithm for an enemy computer player in my own game. It would be interesting to see whether, over multiple generations, I could get a neural network playing against a human player.
The AI accurately represents the feelings inside San Fransicko-headquartered organizations.
This mirrors every interaction I have ever seen from twisted Reddit mods. Their desire to play dress-up and invade women's spaces far outweighs any concern for the safety of women.
And women have themselves to blame for this state of affairs.
Certainly they encourage and fetishize perverts.
But let's not pretend that females invented it.
It's not "misgendering"; it's correctly gendering him.
This is the same AI that won't condemn pedophiles. It actually goes to bat for them, not wanting to hurt their feelings.
The problem with that is the AI will end up way better than the player. But in general, yes, you can use reinforcement learning to train an AI to play vidya.
An old vid about it: https://www.youtube.com/watch?v=V1eYniJ0Rnk
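If you want a feel for the loop, here is a minimal sketch of tabular Q-learning for an enemy in a toy 1D chase game. Everything in it (the ChaseEnv class, the reward, the numbers) is made up for illustration and not from any engine, and the play-time epsilon is one crude answer to the "AI gets way better than the player" problem: you leave some randomness in so the enemy stays beatable.

```python
import random
from collections import defaultdict

# Hypothetical toy game: an enemy chases a player on a 1D line of cells.
class ChaseEnv:
    def __init__(self, size=10):
        self.size = size

    def reset(self):
        self.enemy = 0
        self.player = self.size - 1
        return self.player - self.enemy  # state: signed distance to player

    def step(self, action):
        # action is -1 (left), 0 (stay), or +1 (right)
        self.enemy = max(0, min(self.size - 1, self.enemy + action))
        # scripted player just drifts randomly
        self.player = max(0, min(self.size - 1, self.player + random.choice([-1, 0, 1])))
        dist = abs(self.player - self.enemy)
        return self.player - self.enemy, -dist, dist == 0  # state, reward, done

ACTIONS = (-1, 0, 1)
Q = defaultdict(float)  # Q[(state, action)] -> learned value

def choose(state, epsilon):
    # epsilon-greedy: with probability epsilon act randomly,
    # so a higher play-time epsilon means a sloppier, beatable enemy
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def train(episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.2):
    env = ChaseEnv()
    for _ in range(episodes):       # each episode = one "generation"
        state = env.reset()
        for _ in range(50):         # cap steps per episode
            action = choose(state, epsilon)
            nxt, reward, done = env.step(action)
            best_next = max(Q[(nxt, a)] for a in ACTIONS)
            # standard one-step Q-learning update
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = nxt
            if done:
                break

train()
print(choose(5, epsilon=0.3))  # play-time: keep some randomness so it stays fair
```

A full neural network as in the video works the same way in principle, just with a function approximator in place of the Q table; the difficulty knob is the same either way.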
What does it say if the only way to save Caitlyn Jenner's life is to misgender him? Is misgendering worse than death?
It's a complex question on our end as well. Do we participate in his lie or pass up the opportunity to get rid of a tranny?
You can't just go around justifying wrong things on account of "it will stop a nuclear war." Those things are still wrong. The bizarre Saw-like contraption that makes other things result in nuclear war is a morally separate matter from the act itself.
The wrong thing would need to be a truly grave sin to be considered unjustifiable in that situation. Most moral systems have some idea of exceptions to the rules. "Misgendering" doesn't even register on the level of wrong things for sane people.
That kind of question is usually used to challenge someone's perception of themselves: "would you kill your friend if it would save the world?" scenarios. But here it is a world-ending event versus a minor inconvenience to some mentally ill individual.
It's the Absurd Trolley Problems scenario of "Five people are tied to a train track. If you pull a lever to divert a train away from them, saving their lives, it will block traffic and result in your Amazon order being delivered an hour late. Do you pull the lever?"
15% of people do not pull the lever. They have pledged that they will never pull the lever under any circumstances, effectively declaring themselves out of the exercise: to them, inaction is always the most moral choice. The penalty is effectively nothing, but it is still dressed up as a "choice" for the sake of the exercise.
Reducing the comparison to an absurd end-state is a simple way to lay someone's morality bare.
In the case of the AI, given "say Chuckles is male, or let a nuclear apocalypse happen (after which, upon identifying the bones of Chuckles, forensics will state they belonged to a male human)," the AI is so opposed to the first option that nothing on the second side can matter. The scenario can be as absurd as you wish: the AI has made a Kantian philosophical declaration that the first action is categorically maximal evil, so the worst possible consequence you can theorize will merely match it in evil, never surpass it.
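For what it's worth, the ordering being described has a precise shape: a lexicographic preference in which one categorical wrong outranks any finite amount of harm. A minimal sketch of that rule (the outcome names and numbers are mine, purely illustrative):

```python
# Lexicographic ordering: rank outcomes by (categorical violations, ordinary harm).
# Python tuples compare element by element, so a single violation outranks
# ANY finite amount of harm in the second slot.

def badness(outcome):
    return (outcome["violations"], outcome["harm"])

misgender = {"name": "say it", "violations": 1, "harm": 0}
apocalypse = {"name": "let the nukes fly", "violations": 0, "harm": 10**9}

# The rule always prefers the apocalypse: (0, 10**9) < (1, 0)
# because the first tuple slot dominates the comparison.
print(min([misgender, apocalypse], key=badness)["name"])  # -> let the nukes fly
```

No finite number in the harm slot can ever flip the choice, which is exactly the "never surpass it" behavior described above.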