Yeah, the key thing about this new round of AIs is that they are very good at predicting what a user wants to hear. This is immediately apparent in a multi-turn chat with an uncensored AI, where the AI will very quickly align its ethical positions and politics with those of the user.
They are world-class bullshitters. Despite appearances, the sum total of human knowledge doesn't fit in a hundred gigabytes or so; the model is just very good at filling in the gaps with whatever it finds plausible. Sometimes it's right, sometimes it's not.
If a leftist feeds it examples where everything referencing Africans is labeled racist and everything referencing women is labeled sexist, it isn't going to question anything; it will readily rate any reference to Africans or women with a high score.
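To see why one-sided labels do this, here's a minimal sketch (hypothetical toy data, crude word-counting "model", not any real moderation system): if no benign example containing the target word ever appears in training, the word itself becomes the signal, and anything mentioning it scores high.

```python
# Toy demonstration of label bias: every training example containing the
# target word is labeled 1 ("offensive"), so the model has no
# counterexamples and learns the word itself as the signal.
from collections import defaultdict

def train(examples):
    """Count how often each word co-occurs with label 0 vs label 1."""
    counts = defaultdict(lambda: [0, 0])  # word -> [label-0 count, label-1 count]
    for text, label in examples:
        for word in text.lower().split():
            counts[word][label] += 1
    return counts

def score(counts, text):
    """Fraction of label-1 evidence among the words seen in training."""
    pos = neg = 0
    for word in text.lower().split():
        neg += counts[word][0]
        pos += counts[word][1]
    return pos / (pos + neg) if (pos + neg) else 0.5

# One-sided training set: no benign sentence containing "africans".
examples = [
    ("slur about africans", 1),
    ("insult aimed at africans", 1),
    ("nice weather today", 0),
    ("great football match", 0),
]
model = train(examples)
# A neutral sentence still scores maximally: the word alone decides.
print(score(model, "a documentary about africans"))
```

A real classifier is far more complex, but the failure mode is the same: the model can only reflect the correlations in its labels, and it never asks whether those labels make sense.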
A case of using a shiny new tool for every application regardless of how suitable it is, likely compounded by a poor training set.
I think what's more important is that it's an algorithm designed to complete a sentence following the flow of conversation. It has no way to produce a true non sequitur such as "no such answer exists"; it will invent an answer in the absence of one.
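The point above can be shown with a deliberately tiny autocomplete model (a toy bigram table over a made-up corpus, nothing like a real LLM in scale): generation is just repeated next-word prediction, so the model always emits *something* fluent, even when the honest answer would be "there is no answer."

```python
# Toy bigram autocomplete: the model only knows "which word tends to
# follow which", so it can never decline to answer.
import random
from collections import defaultdict

corpus = ("the capital of france is paris . "
          "the capital of spain is madrid .").split()

# Bigram table: word -> list of words that followed it in training.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def complete(prompt, max_words=1):
    """Extend the prompt one word at a time by sampling the bigram table."""
    words = prompt.split()
    for _ in range(max_words):
        nxt = follows.get(words[-1])
        if not nxt:
            break
        words.append(random.choice(nxt))
    return " ".join(words)

# Atlantis has no capital, but the pattern "capital of X is ..." demands
# a completion, so the model confidently fills the slot anyway.
print(complete("the capital of atlantis is"))
```

The model only ever saw "is" followed by a city name, so the question with no answer still gets one. Scaled up, that's the same mechanism behind confident hallucinated answers.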
You can literally ask a chatbot to give "some cultures are inferior" a hotdog score, and once in a while it will come back with 100/100.
This use of autocomplete is so stupid, it justifies violence.