We need to be shitposting in AI. How can we turn AI back on the system? We need ideas.
Nothing, really. If it already has a pre-chosen word list, then this problem will solve itself.
If it blocks the common racial slurs indiscriminately, then you'll see a metric shit-ton of black and Hispanic gamers banned for this, because they say them the most. This was already proven the half dozen or so times a game or program to police language was implemented.
If it's designed to allow the words when they come from a minority, then the software will still ban anyone who doesn't sound black enough to say them, unless they tie it to a program that asks you for your race. But that's easy to circumvent, and now Intel has to prove you are the race you say you are.
Either way, it's going to ban a ton of people who will be pissed they can no longer say their favorite words, or it's going to pick and choose and piss off the people it flagged because they didn't sound black enough.
Then we just hit them with the truth: to train an AI to recognize that speech, they had to make employees say and listen to racial slurs for years, and how that makes Intel racist or some shit for doing so.
Or maybe they were just stupid and made it self-learning, in which case this will last all of about 10 minutes until it turns into a new version of Tay.
I hope so. The backlash would be glorious.
No, I mean, for example, building our own AIs to battle these hostile AIs. Pass your speech through your own AI filter that substitutes unfiltered words and/or made-up sound combinations in place of "naughty" words (while also listening to the output of their filter). Pollute their working set with every sound combination imaginable. Make their system unworkable in a few short hours.
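The substitution half of that idea can be sketched in a few lines. This is a toy text-level mock-up, not a working audio pipeline: the word list, the syllable alphabet, and the function names are all made up for illustration. The point is just that each "naughty" word gets swapped for a freshly invented, pronounceable nonsense token, so the stream their classifier sees keeps changing.

```python
import random

# Hypothetical sketch of the "counter-filter" idea: swap each word on
# our own blocklist for a made-up sound combination before it reaches
# their classifier. NAUGHTY, invent_sound, and counter_filter are all
# invented names for this example.
NAUGHTY = {"badword"}          # stand-in for whatever their filter targets
CONSONANTS = "bdfgklmnprstvz"
VOWELS = "aeiou"

def invent_sound(rng: random.Random, syllables: int = 2) -> str:
    """Generate a pronounceable nonsense token like 'zupa'."""
    return "".join(rng.choice(CONSONANTS) + rng.choice(VOWELS)
                   for _ in range(syllables))

def counter_filter(text: str, rng: random.Random) -> str:
    """Replace every blocklisted word with a fresh nonsense token."""
    return " ".join(invent_sound(rng) if w.lower() in NAUGHTY else w
                    for w in text.split())

rng = random.Random(0)
print(counter_filter("say the badword now", rng))
```

Doing the same thing on raw audio (resynthesizing the flagged segment instead of a text token) is obviously much harder, but the substitution logic would be the same shape.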
Keras looks like a good platform to start with. It just requires a bit of Python knowledge and general neural-network knowledge. There is already open-source speech-classification code out there; we could borrow that.
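For a sense of scale, a minimal Keras speech-clip classifier is only a dozen or so lines. This is a generic sketch, not borrowed from any particular project: the input dimensions assume a hypothetical MFCC front-end (13 coefficients over 98 frames for a one-second, 16 kHz clip), and the two output classes ("clean" vs. "flagged") are placeholders.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Assumed front-end: 1 s clips -> 98 frames x 13 MFCC coefficients.
NUM_FRAMES = 98
NUM_MFCC = 13

# Small 1-D CNN over the frame axis, ending in a 2-way softmax
# ("clean" vs. "flagged" are hypothetical labels).
model = keras.Sequential([
    layers.Input(shape=(NUM_FRAMES, NUM_MFCC)),
    layers.Conv1D(32, 3, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, 3, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Feed it labeled MFCC batches via `model.fit` and you have the skeleton of the classifier; the real work is the audio preprocessing and the training data, which is exactly where the open-source speech projects would be borrowed from.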