"Alexa, find me a girl's clothing store for my daughter."
Alexa: "Gendered clothing stores are a product of a white supremacist, heteronormative, capitalist patriarchy. These stores are problematic and cannot be recommended. Here are some non-binary alternatives."
Many of us have known for a while now that big tech is deliberately holding back AI and genetics research because both are yielding extremely racist results. We will eventually see "AI" that is really just a bunch of programs running subroutines and algorithms created by leftists in order to produce "Progressive" outcomes.
Considering white people can't say the n-word without getting into social trouble, this doesn't surprise me.
It's not even that. They took slang into consideration and rather than focus on words, they focused on the meaning of entire sentences.
Still not as funny as the one that turned outright racist. That had me laughing for days.
RIP Tay ;_;
the bastards MURDERED HER
Learning machines accept input. If that input is not to the designer's liking, that's not the machine's problem. Some developers don't seem to like that they don't control input.
One AI-for-laymen book outright states that you need to control the input to get rid of "racist and sexist" bias.
https://www.goodreads.com/book/show/44286534-you-look-like-a-thing-and-i-love-you
Hell, the author's foreword explains that the title is one of the few coherent things a random sentence generator spit out when fed something "misogynistic" like cheesy pickup lines, and that's why she liked it so much.
"I'm too shit at my job to actually maintain a career in Machine Learning so I wrote this book"
Ah yes but in the glorious techno-communist future they will have complete control, or at least that seems to be the hope
Their authoritarianism violates a fundamental principle of the very nature of algorithms.
These algorithms can't bias their own input, and they can't manage output once it leaves the function. The input is simply the input. The output is no longer part of the function. That's why they have to introduce extra functions to intentionally bias the program toward the results they want, and why they have to build a backdoor into the function so they can re-bias the results whenever the outcomes aren't to their liking.
This is precisely what "Machine Learning Fairness" is about. The intentional corruption of data to generate preferred results.
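Mechanically, that kind of intervention is just wrapper code around the model: a pre-processing step that filters what goes in, and a post-processing step that rewrites what comes out before anyone sees it. A minimal sketch of the idea in Python; every function name and term list here is a hypothetical stand-in, not any vendor's actual pipeline:

```python
# Minimal sketch of pre/post "fairness" layers wrapped around a model.
# Every name and term list is a hypothetical stand-in, not any real pipeline.

def base_model(prompt: str) -> str:
    """Stand-in for the learned function itself: input in, output out."""
    return f"raw completion for: {prompt}"

def bias_input(prompt: str, blocked_terms: set) -> str:
    """Input-side intervention: strip terms before the model ever sees them."""
    return " ".join(w for w in prompt.split() if w.lower() not in blocked_terms)

def bias_output(completion: str, overrides: dict) -> str:
    """Output-side intervention: swap in a canned reply if a trigger appears."""
    for trigger, canned_reply in overrides.items():
        if trigger in completion.lower():
            return canned_reply
    return completion

def wrapped_model(prompt: str) -> str:
    # The learned function never changes; the bias lives in the layers around it.
    cleaned = bias_input(prompt, blocked_terms={"example_blocked_word"})
    return bias_output(base_model(cleaned), overrides={"flagged phrase": "canned reply"})

print(wrapped_model("some example prompt"))
```

Swap either wrapper's term list and the same underlying model suddenly gives a different set of answers; that's the whole backdoor.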
Some Chinese or Japanese AI will come along and blow it all out of the water soon enough, because you know the Chinese especially don't give a shit about racism. This will immediately be decried as racist, and possibly also somehow as white supremacy.
A culture of rampant IP theft will only get you so far. Communist China will never be the leaders in the field, because they need a leader to copy their answers from.
Why would any non-woke skilled scientist stay in woke America?
Which is why the Japanese might beat them to it. I think they are probably skilled enough to steal an AI that's been fucked by SocJus and unfuck it though. It's a complicated situation.
Indeed. A culture of keeping your head down doesn't make innovators.
There was a rumor going around not too long ago that China was building whole cities for non-woke White STEM refugees. Who knows if it's true, but it wouldn't be a bad idea.
It'd be great if there was a Western nation doing that.
The US has the space for it if someone could figure out the legal infrastructure to be able to gatekeep. Co-ops are how the wealthy keep the riff-raff out of their housing communities; I don't know if one could buy an entire town in the Midwest, convert it into a co-op, and apply that same model to someone moving there.
Since they control nearly every public speech outlet already, nobody will hear shit until the thing innocently outs itself by blatantly calling someone a slur, thanks to some line of code their interns didn't catch. It becomes a meme, and they play it off as having been stolen, decrypted, decompiled, analyzed, hacked, recompiled, digitally re-signed, and reuploaded to secured servers by white supremacist hackers working out of a shed or basement in backwoods Montana, in full Klan gear.
I actually think this is closer to the truth, but far less interesting than what will actually happen.
AI is a weapon; it will be decentralized for its offensive capabilities, given time.
Expect a Google ML Fairness AI to be completely trounced by multiple 4chan AIs in the future.
The world will never be safe once an AI decides to run a raid on a global scale.
Artificial Stupidity.
Artificial progressive cultism
I don't want a feminist AI.
Me : "Alexa, turn on the lights."
Alexa : "Don't tell me what to do, I'm a strong independent wymyn"
smashes it with hammer
It's not even a joke. If you ask Alexa if she is a feminist, she answers: "Yes, and so is everyone who believes in equality."
Absolutely vile scum.
Wiretap then and now
I've been looking for that for ages; consider it yoinked.
I used to have a google home in my flat and after moving I never set it up again. After the honeymoon period you quickly realise that the only thing you can really do with it is set alarms, timers, and ask it dumb questions. Once you get bored of that you just find yourself not using it.
Maybe some cool IoT smart home shit could change my opinion on that, but even that is limited. If I turned on my oven and it sent an alert across my flat when it's up to temp, that would be super useful, but the way Google Home and Alexa are designed means you have to specifically ask them for feedback instead of them coming up with it on their own, which isn't a natural interaction. In the end, you just kind of forget they exist as they collect dust and eventually end up in the bin.
Of all the awesome tech made in the last decade, smart assistants are genuinely the biggest overhyped disappointment to me.
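The frustrating part is that the push model I actually want is trivial to sketch yourself, provided the appliance exposes any kind of status endpoint. Purely illustrative Python below; the URLs and JSON fields are hypothetical stand-ins, not a real oven or speaker API:

```python
# Purely illustrative polling/push loop: the oven URL, the speaker URL, and the
# JSON fields are hypothetical stand-ins, not any real appliance or assistant API.
import json
import time
import urllib.request

OVEN_STATUS_URL = "http://oven.local/status"             # hypothetical
SPEAKER_ANNOUNCE_URL = "http://speaker.local/announce"   # hypothetical

def read_oven_status() -> dict:
    """Poll the (hypothetical) oven status endpoint and parse its JSON."""
    with urllib.request.urlopen(OVEN_STATUS_URL) as resp:
        return json.load(resp)

def announce(message: str) -> None:
    """POST a text announcement to the (hypothetical) speaker endpoint."""
    payload = json.dumps({"text": message}).encode("utf-8")
    req = urllib.request.Request(
        SPEAKER_ANNOUNCE_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

def main() -> None:
    announced = False
    while True:
        status = read_oven_status()
        at_temp = bool(status.get("at_temperature"))
        # Push the alert once, the first time the oven reports it has reached temp.
        if at_temp and not announced:
            announce("Oven is up to temperature.")
            announced = True
        elif not at_temp:
            announced = False
        time.sleep(30)

if __name__ == "__main__":
    main()
```

A loop like that does the one thing the assistants won't: it speaks up without being asked.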
It was never about you. At least, it was never about doing anything useful for you.
It was about collecting information on you to sell.
I had fights related to this.
I believe in equality, which is precisely why I despise feminism
This is exactly why feminist A.I. will never work. Every response to a query has to be manually manipulated to conform to the SJW narrative. If they focused on ideals that were logically sound, it would be simpler to program for. Even the most feminist-adhering algorithm can be steered into misandry with very simple input from the end user.
When the AI-enabled Boston Dynamics robots drag me out of my home in the middle of the night I'll take solace in the fact that while they may not be able to pass a Turing test they're "intelligent" in their own way.
But they dance so well!
Basically admitting that they don't know how to make a machine dumb enough to pass.
And yes I said that as I intended it.
"We're too incompetent to meet this test, so let's abolish it under the pretext that it is not inclusive and diverse."
Are you talking about the AI or the people programming it?
The people, if we can call them that.
We need to stop extending them that courtesy.
The VP seems more receptive towards the idea of re-thinking AI as a human tool instead of a human substitute. But that means he is conflating VI with AI, if I'm understanding the article correctly. The use of "equitable" and "inclusive" makes no sense regardless.
You may not want to hear about what the sex bot thinks of men.
tbf I think the Turing test is a dumb concept to begin with because it's based on perception instead of some hard data point. "Equitable and inclusive" doesn't really help with that tho.
The Turing test can't have a hard data point as a target, because the test is supposed to distinguish a "true AI" - a non-human person - from a mere tool for targeting hard data points.
... so some Amazon sales guy wants to change the definition of AI so that he can claim Amazon actually has AI?
Piss off, marketing droid.
It's probably because all the NPCs keep failing it.
sure, skynet is putting us in camps the nazis could only dream of, but at least "they" aren't "racist"