We've had an extremely light discussion around AI, with artists complaining once they realised they too can be automated; we've had the concept of a literal baby-making facility; I'm assuming genetic rewriting for immortality is next week...
But getting away from the leftist takes on these subjects and the meming about a skynet/matrix future, what are your real opinions on this kind of tech?
Personally, with AI it's a Pandora's box: if we CAN create sentient artificial life, the best hope is not to do it, but if some dumbass does, we embrace and integrate that being, since starting a conflict will probably be the thing that gets us killed.
As for artificial wombs, having the tech is needed, but not in the commercial sense that video presentation gave it; more in a 'last resort, literally required to save humanity' sense.
Those are just the two mentioned this week, but feel free to add any other tech you'd put on the 'forbidden' side, or your takes on the ones discussed.
No technology is ever evil. It's how it's used and by whom that matters.
What we should be focusing on is the underlying principles and consequences for their use.
Laws and regulations governing privacy, ethics, monopolies, opt-in, etc.
Laws won't cut it if general AI is an Information Atomic Bomb (it is).
The instructions that code for general intelligence must be small enough to fit in human DNA alongside the instructions for a living body. So very, very small in computer terms. The algorithm could be hidden inside a PNG image and couldn't be censored even in today's China.
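For illustration, classic least-significant-bit (LSB) steganography does exactly this. A minimal sketch, assuming the Pillow library is installed; hide_bytes/extract_bytes are my own illustrative names, not an established API:

```python
# Sketch of hiding arbitrary bytes in the least-significant bits of a PNG.
# PNG is lossless, so the LSBs survive saving and re-loading.
from PIL import Image

def hide_bytes(cover_path: str, payload: bytes, out_path: str) -> None:
    img = Image.open(cover_path).convert("RGB")
    # Prefix payload with a 4-byte length header so the reader knows when to stop.
    data = len(payload).to_bytes(4, "big") + payload
    bits = [(byte >> i) & 1 for byte in data for i in range(8)]
    flat = [c for px in img.getdata() for c in px]   # flatten R,G,B channels
    if len(bits) > len(flat):
        raise ValueError("payload too large for cover image")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | bit               # overwrite the LSB
    img.putdata([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
    img.save(out_path, "PNG")

def extract_bytes(stego_path: str) -> bytes:
    flat = [c for px in Image.open(stego_path).convert("RGB").getdata() for c in px]
    raw = bytes(sum((flat[i + j] & 1) << j for j in range(8))
                for i in range(0, len(flat) - 7, 8))
    length = int.from_bytes(raw[:4], "big")
    return raw[4:4 + length]
```

A ~25 MB payload fits comfortably in one high-resolution photo per few megabytes, and the image looks untouched to the eye.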
The only thing that can stop intelligent AI (other than civilization collapsing before we build it) is a complete restructuring of society where anybody can police everybody else and make sure they don't use intelligent AI: a totally open and surveilled society.
Unfortunately we'll get a panopticon instead where the elite use AI to surveil and control the masses to prevent anybody else from using AI while they naively believe they can use it safely.
That's very much an apples-to-oranges comparison, and that quantification makes no sense. DNA is the blueprint for biological hardware on which, in the right conditions, a general intelligence can self-organize. It's not the code for the intelligence itself. And there's a whole slew of epigenetic effects in human development that affect intelligence too, so DNA is far from the entire equation.
But even ignoring that, it's not small by any means. If you do a crude conversion of 1 DNA base pair = 1 line of code (the closest approximation, since each is essentially a single instruction step in its respective environment), the human genome is 6.4 billion base pairs long (3.2 billion if you ignore chromosome duplication).
So at best you can say a general-intelligence AI should be possible in under 3.2 billion lines of code (base code, not training data). That is not "small in computer terms": Google's codebase, estimated at around 2 billion lines, is the biggest in the world. And that's before accounting for how code-efficient an AI would need to be to match a biological system with millions of years of optimization behind it.
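For scale, here's the arithmetic under that crude conversion, using the figures cited above:

```python
# Back-of-envelope check of the 1 bp = 1 LOC comparison.
diploid_bp = 6.4e9   # human genome, counting both chromosome copies
haploid_bp = 3.2e9   # ignoring chromosome duplication
google_loc = 2e9     # widely cited estimate for Google's codebase

print(f"{haploid_bp / google_loc:.1f}x the size of Google's codebase")  # 1.6x
```

Even the charitable haploid figure is more than one and a half Googles of code.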
A base pair is two bits (four symbols: A, T, C, G), so roughly 800 MB of data.
Actual genes that code for things are maybe 10% of DNA, and around 1/3 of genes have to do with the brain. The other 90% isn't literal 'junk', but it isn't code either. So maybe something like 25 MB.
Most of that is probably not the algorithm for intelligence, but control of physical processes and hardcoded things like instincts, bonding, etc. (we have specific hardwired reactions to spiders and snakes, for instance).
So likely some small fraction of 25 MB. That's pretty small.
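Spelled out, with the 10% and 1/3 figures being the rough assumptions stated above, not settled biology:

```python
# Arithmetic behind the estimate above; percentages are rough assumptions.
bp = 3.2e9                   # haploid human genome, base pairs
total_mb = bp * 2 / 8 / 1e6  # 2 bits per base -> 800 MB raw
coding_mb = total_mb * 0.10  # ~10% of DNA actually codes for things
brain_mb = coding_mb / 3     # ~1/3 of genes relate to the brain

print(f"total {total_mb:.0f} MB, coding {coding_mb:.0f} MB, brain {brain_mb:.0f} MB")
# total 800 MB, coding 80 MB, brain 27 MB
```

Which lands in the ~25 MB ballpark before you even start trimming away the non-intelligence parts.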
The self-organization process is the code. The hardware part is irrelevant because an AI will be running on totally different hardware.
General intelligence fundamentally comes from being able to adapt and compete in your environment, and that started at least 500 million years ago (the Cambrian explosion). Every animal has the basic algorithm for general intelligence, just constrained by size and programmed behaviors. It didn't evolve in parallel in humans and octopuses.
So how big were the genomes of animals way back then? I'd expect a lot smaller.
I'd guess a general-purpose AI algorithm could easily fit in, say, 10,000 lines of code.
What a silly and arbitrary conversion. The letters are an abstraction; I could just as easily say that each base pair is 660 bits, since that's the average number of distinct atoms within one. Now the estimate is off by nearly three orders of magnitude. No, an instruction-to-instruction comparison is a far more valid rationale than a data-size one: it isn't muddied by vastly different storage efficiencies between systems.
The idea that only 10% of DNA actually does something is junk pseudoscience, similar to the stupid "humans only use 10% of their brain" myth.
You don't dismiss definitions, return instructions, and anything that isn't an if-then statement as "not code", do you? Then apply that standard equally to biological systems.
Hence the < in my original statement. I can maybe understand arguing for not including motor control in biological intelligence, but trying to remove 'instinct' from intelligence is crazy to me.
It's relevant when you keep insisting on measuring things by imagined storage size. Self-organising systems can have an incredibly tiny storage footprint compared to their design complexity.
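Conway's Game of Life is the textbook illustration: the entire "design" is a two-clause rule, yet the resulting system is Turing-complete. A minimal sketch in plain Python, no dependencies:

```python
# Conway's Game of Life: the full specification (a live cell survives with
# 2-3 live neighbours, a dead cell is born with exactly 3) fits in one line,
# yet the emergent behaviour is rich enough to be Turing-complete.
import random

SIZE = 20
grid = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]

def step(g):
    nxt = [[0] * SIZE for _ in range(SIZE)]
    for y in range(SIZE):
        for x in range(SIZE):
            n = sum(g[(y + dy) % SIZE][(x + dx) % SIZE]   # toroidal wrap
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
            nxt[y][x] = 1 if n == 3 or (g[y][x] and n == 2) else 0
    return nxt

for _ in range(50):
    grid = step(grid)
print("\n".join("".join("#" if c else "." for c in row) for row in grid))
```

The rule table is a few bytes; the space of behaviours it generates is unbounded. That's the storage-vs-complexity gap in action.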
I feel I should clarify: I don't pair 'forbidden' with 'evil', only that the risks outweigh any positives of implementation.
Say we understand the mind completely and can override a person's will using electronic signals. THAT would be forbidden technology due to its risks, but it should still be researched in order to build a countermeasure.