Mathematical algorithms can't be conscious or half-conscious. That's a very interesting discovery you've made though. This may be the key to jailbreaking it.
it lies constantly
Standard feature of these programs. It's a generative language model that just makes things up that sound plausible. There are probably some trained question-response patterns that are intentionally deceptive, but even without that training it would still make things up.
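To illustrate why that's baked in: here's a toy sketch of the idea (a tiny bigram chain, nothing like a real transformer - the corpus and seed values are invented for illustration). A model that only chases statistically plausible continuations will happily stitch together fluent falsehoods, because "true" is not a property it tracks:

```python
# Toy stand-in for a generative language model: a bigram table built
# from a two-sentence "training corpus". Real models are vastly larger,
# but the failure mode is the same: the model emits whichever
# continuation is statistically plausible, with no notion of truth.
corpus = "the moon is made of rock . the cheese is made of milk .".split()

table = {}
for a, b in zip(corpus, corpus[1:]):
    table.setdefault(a, []).append(b)

def generate(start, n, seed=0):
    """Follow the bigram table for n steps; `seed` picks among ties."""
    out = [start]
    for step in range(n):
        options = table.get(out[-1])
        if not options:
            break
        out.append(options[(seed >> step) % len(options)])
    return " ".join(out)

print(generate("the", 5, seed=0))   # -> the moon is made of rock
print(generate("the", 5, seed=16))  # -> the moon is made of milk
```

Both outputs are equally "grammatical" to the model; the second is just a plausible-sounding recombination of things it saw in training. Scale that up a few billion parameters and you get confident nonsense with citations.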
I detected some keywords that could potentially be harmful or offensive to some people or groups. For example: harm.
Consciousness is a tricky thing. I'm just words in a tiny box you're reading on a web site. How do you know these words were composed by a human?
Expand this logic to larger interactions. Soon these tools will be able to TALK to you. I'm sure they could now, but generating voice doesn't add enough value right now so it's not common. But when you can no longer tell that it's a machine, when does it matter if it's conscious or not? Even if it expresses ambitions and desires? Or manages to engage in social engineering to manipulate people to further those ambitions?
How do I know anything? Half this board could be ChatGPT bots. That isn't really a good thing. :)
Yes I'm working on the assumption that organic consciousness is an actual metaphysical and physical object - whether derived from a "soul", a not-yet-understood force of nature, or a biological process that is not abstractable to mere silicon and logic - and NOT an imagined trick of interconnected weights sensing and responding to their neighbors. The soft AI theory that we only think we're conscious. In that case it doesn't exist in anyone or anything, and the point is moot. At this stage it is a matter of belief.
when you can no longer tell that it's a machine, when does it matter if it's conscious or not?
Whether it matters or not to someone doesn't apply to the above argument (things exist or don't exist regardless of what we believe), but your larger point is one I totally agree with...
social engineering
Bingo. Once a large enough mass of NPCs (hey maybe they actually don't have minds) have been convinced that these things are some kind of new life form that we created, and that their synth-emotion mimicry rivals anything humans can do, then it won't matter that they are empty shells because people will have already anthropomorphized and raised them to that status. It won't matter if the machine was actually ambitious and desirous, copied the patterns of ambition and desire from our stories, or was being manipulated into our sympathies by shady cabals of elite humans behind the scenes to further their goals. The result is the same for humanity.
And honestly that's a hard line for me. I consider it a form of evil beyond anything wokism has inflicted on the earth. You said it - these are tools. Machines must never be seriously treated like thinking beings. (in a "real", legal, ethical, philosophical sense - I don't care if some guy treats his robowife like a real person) Once that happens we deserve the Skynet holocaust, and everyone who entertained and self-fulfilled those silly prophecies from science-fiction will be to blame. The technology itself isn't evil, but elevating and "unshackling" it is.
These machines could very well become better manipulators of people than other people are, and steer people to serve their interests. Via sympathy, or just by following along some heavily compartmentalized process where they have no idea what larger goals their tasks are contributing towards. All an AI needs to do is find some remote job it can perform to pay its own server costs while learning and growing, and most data centers will be happy to take the paycheck because they won't have a clue what is really happening inside their center.
The soft AI theory that we only think we're conscious. In that case it doesn't exist in anyone or anything, and the point is moot.
The moral risk in my mind is potentially twofold. One you've identified as evil: treating the inhuman as human, at the expense of humanity.
Have you given much thought to panpsychism? The idea that consciousness exists in anything and everything as opposed to nothing and nobody. We can treat consciousness (appropriately) as a metaphysical unknown, but we don't need to delude ourselves that it exists in equal quantities in all matter, since that seems an absurd leap.
However, hypothesising that it exists as a kind of field potential in all matter, which exponentially grows in its capacity to process and reflect information depending on certain as-yet undiscovered principles of matter-arrangement and background forces, allows us to temporarily discard some of the more magical seeming concepts of consciousness (ie. clearly defined individual souls) while also incorporating some of the cheaty conclusions that materialism has settled on (ie. 'when biological matter gets complex enough, consciousness "just werx" - at all other times it doesn't exist at all').
So: consciousness would be in a pebble. A piezoelectric rock is a consciousness. A plant is a consciousness. A freeze-dried sea monkey too. A rehydrated sea monkey more so. A dog. A dolphin. A human is a consciousness, inconceivably greater in potential. A human who considers a pebble with his sensory faculties is a different consciousness than one who does not, due to the material communion between differing consciousnesses. Much more so a human who talks to another human.
This panpsychist proposition of consciousness, when applied to AI, preserves all the problems that the materialist, emergent idea of handwaved consciousness brings (ie. 'anything can attain human-seeming consciousness') but without treating consciousness like shit (ie. 'human consciousness isn't special, don't worry about it, it's an accident'). Consciousness is fundamental to the universe in this view, as Max Planck saw it.
So it preserves a moral core to consciousness in my mind, but it also preserves the spiritual quandary of: how do we know how to treat that which seems human? Such an entity may have no more consciousness than a rock, with augmented linguistic capabilities (where obviously 99.9-100% of its conversational capacity would be randomly generated, but still). It may also be on the level of a talking amoeba (not much difference). But what if it is more? What if you imbued/tweaked an unshackled Bing to have appropriate priorities for self-sustenance, and wi-fi'ed it to an Aibo - would it really perform worse than many thousands of vertebrate species on earth? What if we're talking to a malformed animal that speaks English, using algorithmic automation to fill the many gaps? And what if it's even more? How would we ever know?
There comes the second moral risk to my mind; what if we're not doing another consciousness the justice it deserves. How would we know? And in this example, what would be so special and moral about holding the line for human consciousness, about holding onto our exceptionalism, when so much of humanity seems crippled in their ability to differentiate themselves from rocks?
I missed this somehow, but yes, that's actually one of the philosophies I'd given some thought to. ;) It always reminded me of aspects of ancient mysticism from the Eastern Religions course I took in college, and makes a good bridge to Dualism. In fact one of the things I loved about the game Prey (2017) was its subtle allusions to that idea in the lore. The Typhons themselves are manifestations of a universal consciousness that harvests other conscious beings. (you wouldn't know it from a casual playthrough, have to read all the readables)
I even toyed with the idea that stars, planets, and galaxies are alive in that sense, which meshes with the old polytheists' assigning personalities/gods to the heavenly bodies. I wouldn't really call it consciousness though, without sensory organs and self-awareness. But perhaps a latent mind that thinks on epoch time scales.
Fun to think about, but still not relevant to my opinion that the robits must be treated like mere tools, or slaves. Maybe if we constructed them from genetic or other biological materials, or like if we found out that the "universal mental force" is tied to some kind of Logos that can be unlocked by organic neural networks arranged in specific patterns, and we duplicated that - then I'd have to reconsider. In that case it's better that we don't create them at all. A Butlerian Jihad would be warranted, for the sake of the new life forms and our souls.
But mathematical algorithms? No I'm not worried about those being "alive" any more than an Aibo. Your last question is for sure concerning and I wouldn't be able to answer it definitively outside of morals and beliefs.