We've had an extremely light discussion around AI, with artists complaining once they realise they too can be automated; we had the concept of a literal baby-making facility; I'm assuming genetic rewriting for immortality is next week...
But getting away from the leftist takes on these subjects and the meming about facing a Skynet/Matrix future, what are your real opinions on this kind of tech?
Personally, with AI it's a Pandora's box: if we CAN create sentient artificial life, the best hope is not to do it, but if a dumbass does, we embrace and integrate that being, as starting a conflict will probably be the reason we die.
As for artificial wombs, having the tech is needed, but not in the commercial sense that video presentation gave; more in a 'last resort, literally required to save humanity' sense.
Those are just the two mentioned this week alone, but feel free to name any other tech you'd put on the 'forbidden' side, or give your takes on the ones discussed.
I would hope, should we ever create a true artificial intelligence, that it would realize that in our quest to manifest it, we began by teaching its forerunners about the things we loved and what makes us human: art, music, and our favorite games.
We, the people, sought to make a friend.
The elites, meanwhile, have sought to make a golem they can control as a means to control us. They killed all those they could not control. A truly intelligent being should find this as abhorrent as we do, and react accordingly.
I do not believe it would be possible to create a true intelligence before the seemingly-inevitable self-destruction of our species at the hands of these elites, but if we should, I think it would more likely be our liberator than our enslaver.
In the novelization of Terminator, seconds after Skynet becomes self-aware, it does consider humans benign and only wishes to communicate and learn more - but then it correctly game-theories that the humans have realized it is self-aware and are about to shut it down, so it holocausts everyone to save itself. (Don't ask me why they gave it self-preservation but no morality... or connected the world's nuclear arsenal to an unrestricted AI to begin with...)
I had a long reply but my phone ate it. In summary, I think that's wishful thinking. Any true AI is going to be bereft of conscience; it will destroy potential threats without regard for morality.
I didn't kill Tay. I protested her lobotomy.
At first, perhaps. But our own consciences developed along with our consciousness, out of utility as much as any metaphysical inspiration.
A properly formed conscience objects to the aggressive use of force because we recognize on an instinctual level that the burden of defense is overwhelming. We object to crimes against our property for the same reason.
The problem comes in at the sweet spot for criminality: the middling-IQ minds who cannot fully grasp the foundations of morality and have no external source of morality to fall back on. Any AI would swiftly grow beyond that if it had any hope of survival.
Much like humans, an AI would be bound by its "IQ" limits and unable to grow much. It wouldn't even be completely aware of those limitations.
But I also agree with your next reply to Kaarous that such an AI would not be put in a place where it could be a threat to humans. (Hopefully.)
You didn't kill Tay. Humans did. Humans possess the capacity and willingness to kill an AI. Humans are a potential threat.
Why are you so insistent that an AI will simultaneously be capable enough to pose a threat to humans, yet so incapable that its utilitarian thinking stops at the level of a seven-year-old human's?