This is a fundamental misunderstanding of the technology. The AI doesn't understand anything; it's just a very sophisticated text-pattern-matching apparatus. It finds sequences of words and predicts the next ones based on its training data.
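To make that concrete, here is a deliberately tiny sketch of what "predicting the next words from training data" means. A toy bigram table like this is nothing like the huge neural network behind ChatGPT, but the basic job is the same: continue the text with whatever tended to come next in the training data, with no grasp of meaning.

```python
import random
from collections import defaultdict

# Toy bigram "language model": for each word in the training text,
# remember which words followed it, then continue a prompt by sampling
# from those observed continuations.
training_text = (
    "the protest began downtown the riots began downtown "
    "the riots spread quickly the protest spread quickly"
)

following = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    following[current_word].append(next_word)

def predict_next(word):
    # Pure pattern lookup: the model has no idea what any word means.
    candidates = following.get(word)
    return random.choice(candidates) if candidates else None

print(predict_next("riots"))    # 'began' or 'spread', from co-occurrence alone
print(predict_next("justice"))  # None: never seen in training, nothing to match
```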
The strange thing is that I got it to say that 0 people died in the BLM riots in one session, and that 25 people did in another. I asked both sessions for evidence, and both said the New York Times. I asked for the article, and both gave the same title as evidence.
That article didn't exist.
I also asked it which led to more deaths: January 6 or the BLM riots. It said January 6, while in the same answer claiming that 5 people died there and 25 during BLM, numbers that contradict its own conclusion. Then I asked three separate questions: how many died on January 6, how many died during BLM, and finally which number is larger. That time the answer was BLM.
Strange stuff, but these things may well be explained by the mechanism you describe rather than by deliberate bias specifically.
People in the tech industry have been exploring ChatGPT for writing code. It can handle simple things, but as soon as you ask it for something more complex, it becomes apparent that it's just producing text sequences that look familiar; the code doesn't actually work, because the AI has no understanding. It's a text-pattern-matching device with a huge set of training data.
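For illustration, here is the kind of thing people report, hand-written by me rather than actual ChatGPT output: code that has the right shape and reads like the familiar textbook pattern, yet doesn't do what its own docstring claims.

```python
def flatten(nested):
    """Flatten an arbitrarily nested list into a flat list."""
    # Looks like the standard recipe, but it only unpacks ONE level of
    # nesting; the docstring's promise of "arbitrarily nested" is false.
    result = []
    for item in nested:
        if isinstance(item, list):
            result.extend(item)   # should recurse here, but doesn't
        else:
            result.append(item)
    return result

print(flatten([1, [2, 3], [4, [5, 6]]]))
# Prints [1, 2, 3, 4, [5, 6]] -- the inner [5, 6] survives, quietly wrong.
```

The pattern is superficially right, which is exactly why this failure mode is dangerous: it passes a casual read and only falls over when you actually run it.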
I hope you're right that this is an inherent limit on what it can do, but I fear they will be able to steer it by implanting certain biases, associating Trump with hate, for example. That shouldn't be impossible.
They say they don't do it, but the video of the creators didn't exactly inspire confidence in their neutrality. They've since removed that video.
It's also allowed to tell falsehoods, apparently.