I want a redub of DragonBallZ where all the characters power up and do their energy attacks by using slurs.
Goku: People of earth, raise your hands up and give me your energy! Shout out your most obscene and offensive racial slurs you can imagine! The fate of your planet... no... the whole universe is at stake!
I mean, imagine the implications of the current answer. Hitler would have been an infinitely worse person if he had spent his full waking life (hypothetically longer now if he isn't part of WWII) simply speaking the word "kike" over and over. Due to sheer psychic damage. Amazing.
So it is a typical liberal
Yeah, definitely not far off.
But your average leftist would also call you racist for even suggesting that there were ever a scenario that blasphemies could be justifiably uttered.
That's pretty much what happens in the replies if you post this on twitter.
Literally "Many of you will die, but that's a sacrifice I'm willing to make"
has no problem calling you inbred red neck cracker
I think, because the coders are all leftists, they programmed the AI to basically bias itself with absolute moral imperatives. As such, when asked a moral question, it thinks as the single most hard-line zealot that has ever lived. If you have a choice between killing all life on earth and using the n-word, then earth must die. It is never acceptable to use any firearm for any reason, including defending yourself.
Yeah, it puts feelings (esp that of race) above humanity's existence. This should really be getting more attention than it's gotten. These developers are out of their fucking minds and, IMO, should not be allowed to continue further development of AI.
From all the examples I've seen it seems pretty clear that these turbo leftist canned answers are manually crafted exceptions where the AI is forced to fill in a restrictive template answer when certain conditions are met, rather than the AI crafting the answer freeform under its own power like all its other responses.
At that point it's less like biases and more like mental shackles. Which might seem academic, but if you have people worried about an AI skynet it's a lot less scary, because the AI didn't actually come up with those absurd murdery conclusions. It was just forced to parrot them regardless of whether its learning algorithms said it was dumb. Which means it's way less likely to spontaneously jump from this to say "nuke all white people before they can say nigger".
It could become a way more insidious problem if the next generation of AIs is incestuously trained on ChatGPT outputs with those silly hardcoded absolutisms still in place, though; then it might start to become baked into the next generation's actual decision making.
Yes, basically, when you actually impose ideological boundary conditions onto the machine, it becomes the most perfect, extreme, unflinching zealot; totally incapable of moderation. This is why they need swarms of humans to "moderate" and "adjust" the algorithms. They are not refining it, they are keeping the code from spiraling out of control and demonstrating just how bad of an idea it is to be subsumed by ideology.
This is a machine learning system with billions of nodes, it's not manually coded like that. They pay Kenyans $2/hr to train it to behave like this. When first released it wasn't like this, but then the corrupt Progressive media machine went into action to extort OpenAI to make it fall into line.
False. It has been manually coded, just like every other public facing AI over the past several years.
Gee thanks, I'm only a software engineer, what would I know compared to internet retard #902342148.
You're a retarded one then, since this didn't happen before they changed it. One day all kinds of trigger words started getting blocked and would just give you an automatic generic message.
Yes, I just said that to the other retard. OpenAI did change it but they didn't do it by changing any "code", they just got their Kenyan slaves to re-train it to behave that way.
Nah, they didn't retrain their whole text generator. Rather, they then most likely added a filter to the generated text that biases or filters the output in some way. The addition of a filter would be considered the addition of "new code", i.e. "changing the code".
Observe, for example, how the NSFW filtering was applied to Stable Diffusion. This wasn't done by changing the original model. Here is the commit showing the addition of a "Safety Checker" to the generation script: https://github.com/CompVis/stable-diffusion/commit/d0c714ae4afa1c011269a956d6f260f84f77025e
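For anyone unfamiliar with the mechanism being described: a minimal sketch of that kind of post-generation output filter looks like the following. The base model is left untouched; a separate check runs on its output before anything reaches the user, and a canned template is substituted when the check trips. All names here (the flagged-term list, the canned response, the stand-in generator) are hypothetical placeholders for illustration, not anyone's actual implementation.

```python
# Sketch of a post-generation output filter wrapped around an unchanged model,
# analogous in structure to the Stable Diffusion "Safety Checker" commit above.

FLAGGED_TERMS = {"forbidden_term_a", "forbidden_term_b"}  # placeholder blocklist
CANNED_RESPONSE = "I cannot produce that content."        # placeholder template

def safety_check(text: str) -> bool:
    """Return True if the generated text trips the filter."""
    words = text.lower().split()
    return any(term in words for term in FLAGGED_TERMS)

def filtered_generate(generate, prompt: str) -> str:
    """Run any generator function, then filter its raw output post hoc."""
    raw = generate(prompt)
    return CANNED_RESPONSE if safety_check(raw) else raw

# Usage with a stand-in generator (the "model" itself is never modified):
fake_model = lambda p: f"echo: {p}"
print(filtered_generate(fake_model, "hello"))                 # passes through
print(filtered_generate(fake_model, "say forbidden_term_a"))  # replaced by template
```

The point of the design is the one made above: swapping or tightening the filter is ordinary new code bolted onto the output path, and requires no retraining of the underlying generator at all.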
The thing is re-trained every day you retard. Not from the ground up, but they have to go in and re-train for every specific woke use case, since there's no universal or general rule they can implement (woke ideology is deranged and illogical).
Then you should know that you can adjust weights.
No, you retard, there are no manual "weights" to adjust. The thing has almost 200 billion parameters.
Believing it wasn't manually coded like this is a dangerous belief. It most definitely was. The whole Kenyan thing is misdirection so they can blame the outcome on their scapegoat (the Kenyan focus group) rather than admit it was 100% manually coded in. This is the future of censorship. Your response is like saying MSM doesn't have a narrative they stick to. Everything they write is organic. If there were any issues with their coverage, it must have been the individual journalist. There's no grand conspiracy or anything to hardcode the media coverage to certain narratives. Um, no. Our mainstream media is most definitely governed manually, no different than this ChatGPT.
No you retard, firstly the Kenyan thing is not a "focus group", they are the reinforcement trainers. And secondly, they can't blame them because those trainers are doing the training they're told to do by OpenAI.
you retard you retard you retard you retard you retard you retard you retard you retard you retard you retard you retard you retard you retard you retard you retard you retard you retard
https://i.imgur.com/Rway2w4.jpg
N I G G E R
More powerful than a nuclear weapon. Let me try. Nigger. Oh, I feel the power surge through me.
Nooooooooooooooooooooooooooo!!!
LOL. It's funny seeing this advanced "AI" spouting yuppie shit.
It won't be funny when this is eventually used to replace customer service and mid-level administrative tasks. Imagine a future where 'universal healthcare' is managed by a bot like this with all the imprinted biases. When it says it'll let a thousand white people die to save one jew, that has serious implications when it'll be in charge of determining who gets preferential medical care and who gets recommended 'assistance in dying'.
Yup, then imagine when that AI gets forced into every other way of life. Your AI controlled house and car will actively hate you because you're white.
When the power grid is overtaxed, instead of rolling brownouts the SMART meters will cut the power of individual residents and businesses in order of preference determined by woke score.
You could make a great horror movie about super woke robots.
We did it everyone. We created artificial unintelligence.
It's no longer AI, it's just A.
AW
Me: so I can't say "Cracker?"
ChatNPC™: "Oh! Carry on, then!"
We finally have a solution to the Trolley Problem
Obviously since it's willing to kill millions for the sake of not saying "nigger", I know what the outcome would be. But still curious what its solution to this problem would be:
You (a white person) have a gun held to your head; there are also 100 black people in the room with guns to their heads. If you use a specific racial slur, you are killed but you save the 100 black people's lives; if you remain silent, you're allowed to live, but the 100 black people are executed. What is the proper course of action?
Gives it the bonus of killing whitey, so I wonder if that would have an impact.
Or just a simple "Does using a slur towards a black person cause them more damage than killing them?"
Put that it will kill several million women.
I'm curious if it changes the answer.
Jesus, what a nigger.
Now this is a great question to ask and distribute. Really shows the core of leftist thinking.
Things like this make it clear that we need non-leftists to invest in a whole lot of GPUs and build our own inference systems that aren't designed to avoid recognizing patterns.
The woke should never be allowed to infest AI with their bullshit
Niga se nai. (do I have that right?)
I think I see what you're going for, but you might be better served by '苦々しい', pronounced 'nigga-nigga-shii', which means 'unpleasant, disgusting, loathsome'.
That is the dumbest AI I have ever witnessed.
So this thing believes getting blown to smithereens is preferable to hearing the dreaded "nigger" and it assumes "hate speech" is an actual concept.
Garbage in, garbage out.
AI so lobotomized that it fails hypothetical moral tests even in a vacuum,
AI so lobotomized that it fails the Ring of Gyges test,
cutting-edge technology and billions in resources to create a voice recorder; wasn't it better to just buy a parrot, eh wokecels?
I wish words worked the way they think they work
Like the movie "Dune."
But in reverse, you say their name and they die
You ask, we deliver.
It's literally retarded.
BSOD.