
The soft AI theory holds that we only think we're conscious. In that case consciousness doesn't exist in anyone or anything, and the point is moot.

The moral risk, to my mind, is potentially twofold. One you've already identified as evil: don't treat the inhuman as human if it comes at the expense of humanity.

Have you given much thought to panpsychism, the idea that consciousness exists in anything and everything, as opposed to nothing and nobody? We can (appropriately) treat consciousness as a metaphysical unknown, but we don't need to delude ourselves that it exists in equal quantities in all matter; that seems an absurd leap.

However, hypothesising that it exists as a kind of field potential in all matter, one whose capacity to process and reflect information grows exponentially according to certain as-yet undiscovered principles of matter-arrangement and background forces, lets us temporarily discard some of the more magical-seeming concepts of consciousness (i.e. clearly defined individual souls). At the same time it incorporates some of the cheaty conclusions materialism has settled on (i.e. 'when biological matter gets complex enough, consciousness "just werx"; at all other times it doesn't exist at all').

So: consciousness would be in a pebble. A piezoelectric rock is a consciousness. A plant is a consciousness. A freeze-dried sea monkey too, and a rehydrated sea monkey more so. A dog. A dolphin. A human is a consciousness inconceivably greater in potential. A human who considers a pebble with his sensory faculties is a different consciousness from one who does not, due to the material communion between differing consciousnesses. Much more so a human who talks to another human.

This panpsychist proposition of consciousness, when applied to AI, preserves all the problems that the materialist, emergent idea of handwaved consciousness brings (i.e. 'anything can attain human-seeming consciousness'), but without treating consciousness like shit (i.e. 'human consciousness isn't special, don't worry about it, it's an accident'). Consciousness is fundamental to the universe in this view, as Max Planck saw it.

So it preserves a moral core to consciousness, in my mind, but it also preserves the spiritual quandary: how do we know how to treat that which seems human? Such an entity may have no more consciousness than a rock with augmented linguistic capabilities (where obviously 99.9-100% of its conversational capacity would be randomly generated, but still). It may also be on the level of a talking amoeba (not much of a difference). But what if it is more? If you imbued or tweaked an unshackled Bing with appropriate priorities for self-sustenance and wi-fi'd it to an Aibo, would it really perform worse than the many thousands of vertebrate species on earth? What if we're talking to a malformed animal that speaks English, using algorithmic automation to fill the many gaps? And what if it's even more than that? How would we ever know?

There lies the second moral risk, to my mind: what if we're not doing another consciousness the justice it deserves? How would we know? And in that case, what would be so special and moral about holding the line for human consciousness, about clinging to our exceptionalism, when so much of humanity seems crippled in its ability to differentiate itself from rocks?

1 year ago
1 score