The strange thing is that I got it to say that 0 people died in the BLM riots in one session, and that 25 people did in another. I asked both for the evidence, and both sessions said the New York Times. I asked for the article, and both gave the same title as their source.
That article didn't exist.
I also asked it what led to more people dying: January 6 or the BLM riots. It said January 6, even while claiming in the same answer that 5 people died there and 25 died during BLM. Then I asked separate questions - how many died on January 6, how many during BLM, and then, as a third question, which is more - and this time the answer was BLM.
Strange stuff, but these things may well be explained by what you're describing rather than by bias specifically.
People in the tech industry have been exploring using ChatGPT to write code, and while it can do simple stuff, as soon as you ask it to create more complex things it becomes apparent that it's just producing text sequences that look familiar. The code doesn't actually work, because the AI has no understanding. It's just a text pattern matching device with a huge set of training data.
It's clearly been very hyped up.
Obviously, the AI has no understanding.
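To make the "pattern matching over training data" point concrete, here is a deliberately crude sketch: a word-level Markov chain. This is not how ChatGPT actually works internally (it's a neural network, not a lookup table), and the function names here are made up for the illustration, but it shows how a program can produce locally plausible text purely from statistical patterns in its training data, with zero understanding of what the words mean:

```python
import random
from collections import defaultdict

# Deliberately crude illustration: a word-level Markov chain.
# It only records which word tends to follow which word in the
# training text, then stitches new text together from those counts.
# There is no model of meaning anywhere - just observed patterns.

def build_model(text):
    """Map each word to the list of words seen immediately after it."""
    words = text.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length=15):
    """Walk the pattern table, picking an observed next word each step."""
    word = start
    output = [word]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break  # no pattern recorded for this word, so stop
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

training_text = (
    "the model writes code that looks right because the model has seen "
    "code that looks right before but the model has no idea what the code does"
)

model = build_model(training_text)
print(generate(model, "the"))
```

Scale that basic idea up enormously and you get output that reads plausibly at a glance but has no built-in notion of being correct, which is exactly the "looks familiar but doesn't actually work" failure mode with generated code.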
I hope you're right that this is an inherent limit on what it is able to do, but I fear that they will be able to program it by implanting certain biases. Associate Trump with hate, for example. That should not be impossible.
They say they don't do it, but if you have seen the creators - they are pink-haired landwhales and troons. They've since removed the video of the creators.
The biases are going to come from the training data, but it's also possible for them to install a filter just before the output. Basically, they'd check whether it's about to start sieg heiling like Tay and then just rerun the whole thing until it produces an acceptable output. You can see this behavior with something like Character.AI, where it will start producing output right up until the point where it figures out it's going to be NSFW content, and then it purges it and tries again.
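For what it's worth, here's a rough sketch of what that kind of "regenerate until acceptable" output filter could look like. This is nobody's actual implementation - generate_reply() and is_acceptable() are made-up placeholders for the model call and a separate moderation check - it just spells out the loop that the visible behavior suggests:

```python
import random

# Rough sketch of a "regenerate until acceptable" output-filter loop.
# Everything here is a placeholder: generate_reply() stands in for the
# real model call, and is_acceptable() stands in for a separate
# moderation check. This is the behavior as described, not any
# company's actual code.

BANNED_PHRASES = ("banned phrase one", "banned phrase two")

def generate_reply(prompt: str) -> str:
    """Stub for the model; sometimes produces an 'unacceptable' draft."""
    drafts = [
        f"Harmless answer to: {prompt}",
        f"Draft containing banned phrase one about: {prompt}",
    ]
    return random.choice(drafts)

def is_acceptable(text: str) -> bool:
    """Toy moderation check: reject drafts containing banned phrases."""
    lowered = text.lower()
    return not any(phrase in lowered for phrase in BANNED_PHRASES)

def filtered_reply(prompt: str, max_attempts: int = 5) -> str:
    """Keep regenerating until a draft passes the check, then show only that."""
    for _ in range(max_attempts):
        candidate = generate_reply(prompt)
        if is_acceptable(candidate):
            return candidate
        # A rejected draft is silently thrown away and regenerated,
        # which would look exactly like output starting to appear
        # and then being purged.
    return "Sorry, I can't help with that."  # canned fallback if nothing passes

print(filtered_reply("tell me about Tay"))
```

From the user's side, the telltale sign is exactly what you describe: text starts appearing, then gets purged and retried when the check flags it.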
I agree that we're going to see tampering in the name of their ideological goals. I just take issue with the mental model a lot of people have of how AI works. It's not some sci-fi sentient creature or Skynet. It's just a pattern matching machine.
For some people this is probably just a semantic device, like when you say your computer is “thinking”. Well, no, your computer isn’t actually thinking and it doesn’t actually have a personality, but sometimes it’s useful to talk about it as if it does, especially when talking with a layperson about technology.