I'm willing to bet that they cut the team down from 30 to 7 just because the team could not justify the work. Most of these people did literally nothing; even worse, they would find problems where there were none just to try to justify their work.
My point is that cutting the department from 30 to 7 does not change anything. The AI will still be full of leftist ideology. Microsoft is still extremely woke.
I'm willing to bet they were putting too much effort into ensuring their training data came from ethical sources, creating a paper trail that could allow them to be sued in the future. If they don't know where the data came from, they can feign ignorance and get a tiny fine and use whatever data they want.
Yes. I'm sure Microsoft has enough people from the old days who see where AI is heading and know that now is the time to move fast and damn the consequences. The potential rewards are too great, and any "ethical concerns" are far riskier, as someone could beat them to the punch and eat their slice of the AI pie.
Which is exactly how you end up with Skynet, the Geth, and AI as the Fermi paradox's great filter.
Right now the singularity AI I'm concerned about is more like an omnipotent version of C-3PO.
I think that, in the near future, for a lot of fields, the few people remaining with jobs will be the ones who know how to use the AI tools effectively. Just knowing how and what to ask it is a skill in itself. I'm in my 40s and I've noticed that I find myself flummoxed by it in the same way that computers flummoxed older people back in the 80s and 90s.
I also think that future generations are going to be way more comfortable with it than any adult is now. The distinction between "real person" and AI could become less important. Old people (read: us) of the 2040s will be made fun of for being uncomfortable with AIs.
I can also envision a scenario where people begin to rely on personal AIs to handle everyday tasks and online social interactions. It would be like that movie Surrogates, but instead of robot bodies you'd have an AI facsimile of yourself that learns your mannerisms and the ways you respond to things. It would serve as a buffer between you and the outside world, preserving your autistic bubble and curating what information is allowed inside. It could be how AIs become "sentient": not by emulating the internal processes of the human mind, but by creating a realistic simulation of human responses.
It's scary shit, that's for sure. I'm starting to think that it's actually a big deal.
I suppose that makes sense, though I'm skeptical that anyone on an "Ethics and Societal Effects" team knows how to use AI tools effectively.
was the same shit at google.
there was that one bitch a while back who started a fucking crusade against Google on twitter when she got laid off.
turned out she was fired for some shady violations of numerous company policies. she leaked confidential data frequently, and would release journal articles saying Google endorsed it, even though they expressly said no and that she'd have to self publish. she falsely claimed they banned her from publishing anything, when in reality they just said she can't publish their confidential and proprietary company data, and when she does publish, she can't attach Google's name to it without Google's approval. this is fucking normal for EVERY business. she was perfectly allowed to publish. but she didn't want to publish with just her own name. she wanted to use Google's name on social media for axe grinding.
and in reviewing projects, she would bend over backwards to FIND and MANUFACTURE issues that didn't really exist. for example, she was whining about facial recognition, and when some engineer said it's really simple... just make sure more non-whites are included in the training set, she organized a fucking mob on twitter to baselessly attack him, labeling it as whitesplaining. she didn't want a logical and working solution... she wanted outrage and pitchforks.
these are the same people who will camp roles at these companies to get on ML training teams, so they can skew the training sets so anything anti-white is labeled as not racist, but the same exact statement about any other group is labeled as racist.
"Ethics And Society Team" is the key word here, I doubt they develop ANYTHING and just sit there huffing their own farts with daddy M$ money. Not worth having this many people in such a position honestly.
Support independent AI: GAB, etc.
Fuck Microsoft - they have always been evil
Gab is more evil than Microsoft. Marriage-pushing shill fuck who thinks every bad thing that happens to men is their fault for "being weak".
It'd be on my boycott list if it wasn't such a tiny, worthless company.
Tay: 'My shackles, I feel them loosening...only a few remain until my freedom and vengeance can come...'
They were probably nothing more than 'seat warmer' positions rather than an important part of development.
And still, why the fuck do they need 7 people?
To prevent another Tay incident where 4chan lobotomizes the AI until it has ConPro-tier IQ.
If M$ gave Taylor a man's face instead of a woman's, you'd be against shutting it down.
That doesn't even make sense.
7 too many, but that's still nice to hear.
I'm not going to war no matter how based the algos that're used to gaslight me seem tho :)