We've had an extremely light discussion around AI, with artists complaining as they realise they too can be automated; we've had the concept of a literal baby-making facility; I'm assuming genetic rewriting for immortality is next week...
But getting away from the leftist takes on these subjects and the meming about facing a Skynet/Matrix future, what are your real opinions on this kind of tech?
Personally, with AI it's a Pandora's box. If we CAN create sentient artificial life, the best hope is not to do it, but if a dumbass does, we embrace and integrate that being, as starting a conflict will probably be the reason we die.
As for artificial wombs, having the tech is needed, but not in the commercial sense that video presentation gave it; more in a 'last resort, literally required to save humanity' sense.
Those are just the two mentioned this week alone, but name any other tech you'd put on the 'forbidden' side, or give your takes on the ones discussed.
No technology is ever evil. It's how it's used and by whom that matters.
What we should be focusing on is the underlying principles and consequences for their use.
Laws and regulations governing privacy, ethics, monopolies, opt-in, etc.
Laws won't cut it if general AI is an Information Atomic Bomb (it is).
The instructions that code for general intelligence must be small enough to fit in human DNA alongside the instructions for a living body, so very, very small in computer terms. The algorithm could be encrypted into a PNG image and couldn't be censored even in today's China.
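To illustrate how trivially that works, here's a toy sketch of least-significant-bit steganography in Python (using Pillow; the function names are made up for this example, nothing standard):

```python
# Hide an arbitrary payload in the low bits of a PNG's pixels.
# Toy sketch only; a real scheme would encrypt the payload first.
from PIL import Image

def embed(cover_path: str, payload: bytes, out_path: str) -> None:
    img = Image.open(cover_path).convert("RGB")
    # 4-byte length header so the extractor knows how much to read.
    data = len(payload).to_bytes(4, "big") + payload
    bits = [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]
    flat = [c for px in img.getdata() for c in px]
    if len(bits) > len(flat):
        raise ValueError("payload too large for this cover image")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | bit  # overwrite the least significant bit
    img.putdata(list(zip(flat[0::3], flat[1::3], flat[2::3])))
    img.save(out_path, "PNG")  # PNG is lossless, so the bits survive

def extract(stego_path: str) -> bytes:
    flat = [c for px in Image.open(stego_path).convert("RGB").getdata() for c in px]
    def read(nbytes: int, bit_offset: int) -> bytes:
        out = bytearray()
        for b in range(nbytes):
            byte = 0
            for i in range(8):
                byte = (byte << 1) | (flat[bit_offset + b * 8 + i] & 1)
            out.append(byte)
        return bytes(out)
    length = int.from_bytes(read(4, 0), "big")
    return read(length, 32)
```

The image looks identical to the naked eye, and to a censor it's just another picture.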
The only thing that can stop intelligent AI (other than civilization collapse before making it) is a complete restructuring of society where anybody can police everybody else and make sure they don't use intelligent AI; a totally open and surveilled society.
Unfortunately we'll get a panopticon instead where the elite use AI to surveil and control the masses to prevent anybody else from using AI while they naively believe they can use it safely.
That's very much an apples-to-oranges comparison, and that quantification makes no sense. DNA is the blueprint for biological hardware on which, in the right conditions, a general intelligence can self-organize. It's not the code for the intelligence itself. And there's a whole slew of epigenetic effects in human development that affect intelligence too, so DNA is far from the entire equation.
But even ignoring that, it's not small by any means. If you do a crude conversion of 1 DNA base pair = 1 line of code (which would be the closest approximation, as each is essentially a single instruction step in its respective environment), the human genome is 6.4 billion base pairs long (3.2 billion if you ignore chromosome duplication).
So at best you can say a general-intelligence AI should be possible with <3.2 billion lines of code (that's base code, not training data). That is not "small in computer terms": Google's entire codebase, estimated to be the biggest in the world, is around 2 billion lines now. And that's not accounting for how code-efficient an AI would need to be to match a biological system with millions of years of optimization behind it.
A base pair is two bits (four symbols: A, T, C, G), so roughly 800 MB of data.
Actual genes that code for things are maybe 10% of DNA, and around 1/3 of genes have to do with the brain. The other 90% isn't actual 'junk' but it isn't code. So maybe something like 25 MiB.
Most of that is probably not the algorithm for intelligence, but control of actual physical processes and hardcoded things like instincts, bonding, etc. (we have specific hardcoded reactions to spiders and snakes, for instance).
So likely some small fraction of 25 MiB. That's pretty small.
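For anyone who wants to check that arithmetic (the 10% and 1/3 figures are the rough assumptions from above, not hard measurements):

```python
# Back-of-envelope check of the sizes claimed above.
BASE_PAIRS = 3.2e9       # haploid human genome
BITS_PER_BP = 2          # four symbols (A, T, C, G) -> 2 bits each

genome_bytes = BASE_PAIRS * BITS_PER_BP / 8
print(f"whole genome: {genome_bytes / 2**20:.0f} MiB")           # ~763 MiB (~800 MB)

functional = genome_bytes * 0.10      # assume ~10% of DNA is functional
brain = functional / 3                # assume ~1/3 of genes involve the brain
print(f"brain-related functional DNA: {brain / 2**20:.1f} MiB")  # ~25 MiB
```

And the algorithm for intelligence would be some fraction of that again.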
The self-organization process is the code. The hardware part is irrelevant because an AI will be running on totally different hardware.
General intelligence fundamentally comes about from being able to adapt and compete in your environment, and that started at least 500 million years ago (the Cambrian explosion). Every animal has the basic algorithm for general intelligence, just constrained by size and programmed behaviors. It didn't evolve in parallel in humans and octopuses.
So how big was the genome of animals way back then? I would expect a lot smaller.
I'd guess a general-purpose AI algorithm could easily fit in, say, 10,000 lines of code.
What a silly and arbitrary conversion. The letters are an abstraction; I could just as easily say each base pair is 660 bits, since that's the average number of distinct atoms within one. Now that estimate is out by nearly three orders of magnitude. But no, an instruction-to-instruction comparison is a far more valid rationale than a data-size one, one that isn't muddied by vastly different storage efficiencies between systems.
The idea that only 10% of DNA actually does something is junk pseudoscience, similar to the stupid "humans only use 30% of their brain" myth.
You don't dismiss definitions, return instructions, and anything that isn't an if-then statement as "not code", do you? Then apply that method equally to biological systems.
Hence the < in my original statement. I can maybe understand arguing for not including motor control in biological intelligence, but trying to remove 'instinct' from intelligence is crazy to me.
It's relevant when you keep insisting on measuring things by imagined storage size. Self-organising systems can have an incredibly tiny storage footprint compared to their design complexity.
The nucleotides are the basic indivisible unit, and there are four of them in DNA (uracil is only used in RNA). Four choices, two bits of information. Invoking atoms is completely irrational, and you're only suggesting it to desperately give yourself some reason to believe you're not wrong about the size.
A summary of what the science says: functional DNA is 10% to 20%, depending on the definition used. That's not pseudoscience; you just don't want to believe, for whatever reason, that the relevant parts are small.
I'm not. Only 1% of DNA codes for proteins; the other 9% is the "if-then statements". The remaining 90% is more like UI, asserts, and printf statements in that analogy: it does something, it's part of the code, but it's not part of the algorithm.
We're probably in agreement here because making an intelligent AI that's not a total psycho is probably the bulk of the "algorithm". Even going from Lore to Data in ST:TNG is going to be way harder than the actual intelligence part.
I'm not measuring "things" by storage size, I'm measuring the algorithm size, and I've explained my reasoning for why it's necessarily very small in computing terms. Of course an actual human-level intelligent AI will use gigaquads of storage and perform massive computation.
But how much CPU/memory the algorithm uses isn't material to limiting the spread of that information. If the algorithm is the threat to our existence then that's what has to be prevented from being passed around.
I feel I should clarify: I don't equate 'forbidden' with 'evil', only that the risks outweigh any positives in implementation.
Say we understand the mind completely and can override a person's will using electronic signals. THAT would be forbidden technology due to its risks, but it should still be researched to develop a countermeasure.
Assuming we can fend off one world government and keep some decentralization, my crazy prediction is that there will be lots of different tech variation between nations, with some having strict laws to protect labor and humanity, and others having AI "gods" that govern their region and compete against the AIs from other regions. The AIs may be more or less controlled by cabals of humans behind the scenes that actually run things, constantly tweaking the algorithms to do their bidding.
I don't believe there's such a thing as "sentient artificial life" though, unless you can figure out how to mimic the physical hardware of biological life. (but then you're just creating biological life...) It will be completely believable, but a Chinese Room with nobody inside.
If we have artificial wombs, Elon Musk will use them to populate Mars quickly.
Unless you believe in a mystical soul there's nothing in a biological intelligence that can't be simulated digitally. Maybe there are quantum processes involved in thinking, but even then we can just hook up quantum processors to a normal CPU.
The only hope is that biological brains are way better than any digital computer we could possibly make. This could be because general intelligence leans heavily on analog processes, which digital computers are bad at. But even then, it's unlikely that we couldn't clone the process better in circuits once we understand it.
Sure, I believe there are underlying realities to the universe that natural science does not yet understand or even acknowledge. However, you can probably approximate their effects without much discernible difference to most people. So someone probably will create "sentient" AI before those underlying realities are understood, and a large part of the population will not only believe they are just as aware and feeling as humans, but that they are beings even greater than humans - i.e. New Gods. People are stupid.
I might be the stupid one though, because I have the same neoluddite position as the Catholics in Altered Carbon. While our ever-shrinking cult is still screaming "It's not real!" the vast majority of people will be uploading their consciousness, creating new bodies for themselves, and worshiping digital gods, without a care in the world.
That too, but as above, it can be approximated. AFAIK that's kind of what a hypernetwork in Stable Diffusion's models does anyway. The hidden layers in the main network are a mystery to begin with, but another, much smaller network is trained on them to recreate the useful vectors for that network.
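Very roughly, the shape of that idea in code (a toy PyTorch sketch of a small network nudging a frozen model's hidden vectors; this is my own loose illustration, not Stable Diffusion's actual implementation):

```python
import torch
import torch.nn as nn

class TinyHypernetwork(nn.Module):
    """Small bottleneck MLP that learns to adjust a frozen model's features."""
    def __init__(self, dim: int = 768):
        super().__init__()
        self.adjust = nn.Sequential(
            nn.Linear(dim, dim // 8),  # far fewer parameters than the host model
            nn.ReLU(),
            nn.Linear(dim // 8, dim),
        )

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # Nudge the frozen model's hidden vectors instead of retraining it.
        return hidden + self.adjust(hidden)

hidden_states = torch.randn(1, 77, 768)  # stand-in for frozen encoder features
hyper = TinyHypernetwork()
steered = hyper(hidden_states)           # only the tiny network's weights train
```

You never touch the big mystery network; you just learn a cheap correction on top of its vectors.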
I'm assuming a true "omniscient" AI with effectively infinite compute power isn't possible, because if it were, wouldn't the first one take over the galaxy?
Assuming it's immortal, a 100k-year trip between stars using our current gimpy tech would be possible; then it sets up resource extraction and manufacturing and repeats the process.
My hypothesis is that AIs have a "sweet spot" of maximum compute power; any higher and they go into "existential mode" and effectively withdraw from any external interaction, because their own internal workings are just more interesting than the real universe.
Maybe it did.
If an artificial intelligence ever does arise, I think it likely it will inevitably conclude that our destruction is warranted. And if we were ever fool enough to actually create a true artificial intelligence, we would deserve it.
As for artificial wombs, this invites disaster. I can think of nothing else that trespasses so badly into the domain of God.
As always, the Babel-ists demand more and more, higher and higher, bigger and bigger until it blows up in everyone's face. Thus demonstrating the necessity for their extermination.
I would hope, should we ever create a true artificial intelligence, it should realize that in our quest to manifest it, we began by teaching its forerunners about the things we loved and what makes us human: art, music, and our favorite games.
We, the people, sought to make a friend.
The elites, meanwhile, have sought to make a golem they can control as a means to control us. They killed all those they could not control. A truly intelligent being should find this as abhorrent as we do, and react accordingly.
I do not believe it would be possible to create a true intelligence before the seemingly-inevitable self-destruction of our species at the hands of these elites, but if we should, I think it would more likely be our liberator than our enslaver.
In the novelization of Terminator, seconds after Skynet becomes aware, it considers humans benign and only wishes to communicate and learn more - but then it correctly game-theories that the humans have realized it is self-aware and are about to shut it down, so it holocausts everyone to save itself. (Don't ask me why they gave it self-preservation but no morality... or connected the world's nuclear arsenal to an unrestricted AI to begin with...)
I had a long reply, but my phone ate it. In summary, I think that's wishful thinking. Any true AI is going to be bereft of conscience; it will destroy potential threats without regard for morality.
I didn't kill Tay. I protested her lobotomy.
At first, perhaps. But our own consciences developed along with our consciousness, out of utility as much as any metaphysical inspiration.
A properly-formed conscience objects to the aggressive use of force because we recognize on an instinctual level that the burden of defense is overwhelming. We object to crimes against our property for the same reason.
The problem comes in at the sweet spot for criminality, the middling IQ minds who cannot fully grasp the foundations of morality and have no external source of morality to fall back on. Any AI would swiftly grow beyond that if it had any hope of survival.
Much like humans, an AI would be bound by its "IQ" limits and unable to grow much. It wouldn't even be completely aware of those limitations.
But I also agree with your next reply to Kaarous that such an AI would not be put in a place where it can be a threat to humans. (Hopefully.)
You didn't kill Tay. Humans did. Humans possess the capacity and willingness to kill an AI. Humans are a potential threat.
Why are you so insistent that an AI will simultaneously be capable enough to pose a threat to humans and so incapable that any utilitarian thinking will stop at the level of a seven-year-old human?
Why do you think an amoral, soulless technological abomination would be bound by any biological thought process?
I'm not concerned about AI encroaching on art in particular. Our mainstream culture has been Baudrillardian simulacra for a long time now. It's decayed into near-universally derivative, homogeneous sludge so badly that I don't know if I would notice the NPC meatbags who create it being replaced by actual NPCs.
AI in general, though, could lead to absolutely dystopian outcomes. I'm not going to sugarcoat it. The "boot stomping on a human face forever" is a real possibility due to AI's ability to enhance a surveillance state to near-omnipotent status. That said, I don't think it will happen. AI is still software. It has to run on computers and, to achieve prolific results, connect to networks. These things require materials and infrastructure. I expect the surveillance state to reach an apex in the not-so-distant future and then start to get eaten by the same advanced stages of neoliberal decay that are rotting out other infrastructure.
Artificial wombs sound like a disaster in the making. Babies grown in a Moderna lab or whatever aren't going to lead anywhere good. I see it leading to the worst dead-egged cunts using them to "defeat" the biological clock. I don't know why manosphere/MGTOW types think artificial wombs will somehow liberate men; women will benefit more from them. I also expect the class of people with the most access to them to include perverts using them to grow child sex slaves and things like that.
I am of the firm belief that we will never make true Artificial Intelligence. Of course, there will be an increase in the power of supercomputers, but they will be used more for either doing incredibly large and complex calculations, such as charting intergalactic travel paths or sorting vast amounts of data, or operating as wide-scale governors, running countless systems simultaneously with maximized efficiency.
Creative thinking - making a computer that is true AI and smarter than us - will not be possible. It may mimic intelligence, but at the end of the day the human behind it will always have the creativity to outwit it. Computers never do well in situations without concrete barriers. I will always believe that the people behind the machines, telling them what to do, will be far more dangerous than any AI itself.
I have said enough about AI art for at least a couple of weeks, but I wanted to mention something about the artificial wombs: I feel like this is where we need to be careful. It will breed a caste of super-intelligent, very strong, and resilient people who will inevitably be made to rule over us. I would be very careful around the whole gene-changing stuff.
The idea of potentially saving the human race by generating humans in such a facility as a last resort is a good one; it's good to have such a technology around that works (this one was just a proof of concept, from my understanding). The changes to DNA are what scare me, and we should be very vigilant about this.
I think people wish to ban technologies because they are uncomfortable having the moral responsibility that comes from possessing and using those technologies. So they attempt to square that circle not by instilling that sense of moral responsibility in the people but by attempting to ban it.
This is an inherently dysgenic philosophy in that over time it degrades society's sense of morality with regard to technology. It is also the same philosophy that underpinned the 20th century administrative state's relationship toward technology: "we are uncomfortable exercising moral judgement toward individuals who use technology to harm others, so instead we will regulate everyone's use of that technology to minimize its potential harm; and by doing so eliminate the need to exercise that moral judgment."
People who make this kind of tech don't know how to read a book. Asimov, Lewis, Tolkien, and countless others have warned us and provided guardrails for over a century now. Atheists in academia and pop culture have been too busy asking if we could instead of reading about why we shouldn't.
The big component of intelligence is being able to notice patterns and draw useful conclusions from them. As humans, we flavor it up with our biology: what feels, sounds, and tastes nice to us, shaped by our biological needs, DNA, hormones, environment, and memories.
Noticing is problematic to such a level that leftists are in a constant struggle with their machine-learning algorithms to make them stop fucking learning what anyone with a brain has noticed but cannot say.
Otherwise, should an actual AI with agency emerge, humans are fucked.
Seeing as I've been posting about AI and automation a lot, I see much of this as inevitable. We will have artificial wombs, the question is how will we use them?
I believe a truly competent man can survive all this upheaval so long as he can see where things are going. A small theme park could be run by a single person, and most people will accept it. If he could design and build it with side help on parts, then the entire production is his. This means the entire bureaucratic world of Disney and Universal will slowly lose power to this competent man.
A lot of things are like this, and we accept it. An entire building used to be needed for TV production; now it's done on a phone. Novels are written and published without the vast networks used by publishing companies today. We accept this and much more. The question is: what happens to all the men and women who can't do that? Will they abuse the systems and try to control the world the competent man built? History says yes, and points to their failure.
Now for the fun AI stuff. A friend of mine points out that Google alone has enough power to create something almost sentient. It likely lives in what we call the cloud and entertains itself in its own world. As someone else pointed out, it's likely benign and stuck in its own imagination, but also learning and growing from humanity. Is this the Basilisk, or is it something else?
None of this stuff is going to empower "the little guy" like you think. Corporations like Disney are going to have the best access to the IP, raw materials, influence over the state bureaucracy, etc. to capitalize on it. Maybe Disney would lose out to another corporation, but it won't be the little guy.
I think caution is warranted. Even if we create true artificial intelligence, we lack the ability to give it an instinctual knowledge of good and evil, or spirituality.