I'm willing to bet that they cut down the team to 7 from 30 just because the team could not justify the work. Most of these people did literally nothing; even worse, they would find problems where there were none just to try to justify their work.
My point is that cutting the department from 30 to 7 does not change anything. The AI will still be full of leftist ideology. Microsoft is still extremely woke.
I'm willing to bet they were putting too much effort into ensuring their training data came from ethical sources, creating a paper trail that could allow them to be sued in the future. If they don't know where the data came from, they can feign ignorance and get a tiny fine and use whatever data they want.
Yes. I'm sure Microsoft has enough people from the old days who see where AI is heading and know that now is the time to move fast and damn the consequences. The potential rewards are too great, and any "ethical concerns" are far riskier, as someone could beat them to the punch and eat their slice of the AI pie.
I think that, in the near future, for a lot of fields, the few people remaining with jobs will be the ones who know how to use the AI tools effectively. Just knowing how and what to ask it is a skill in itself. I'm in my 40s and I've noticed that I find myself flummoxed by it in the same way that computers flummoxed older people back in the 80s and 90s.
I also think that future generations are going to be way more comfortable with it than any adult is now. The distinction between "real person" and AI could become less important. Old people (read: us) of the 2040s will be made fun of for being uncomfortable with AIs.
I can also envision a scenario where people begin to rely on personal AIs to handle everyday tasks and online social interactions. It would be like that movie Surrogates, but instead of robot bodies you'd have an AI facsimile of yourself that learns your mannerisms and the ways you respond to things. It would serve as a buffer between you and the outside world, preserving your autistic bubble and curating what information is allowed inside. It could be how AIs become "sentient": not by emulating the internal processes of the human mind but by creating a realistic simulation of human responses.
It's scary shit, that's for sure. I'm starting to think that it's actually a big deal.
I suppose that makes sense, though I'm skeptical that anyone on an "Ethics and Societal Effects" team knows how to use AI tools effectively.