Usually, questions like these can only be debated to exhaustion, for there can be no final answer…until now! Recently, AI chatbots like ChatGPT have taken the internet by storm.
This isn't dangerous, reductive thinking or anything. This nonsense is a greater threat to free speech going forward than even concepts like 'misinformation and disinformation.'
Trained on massive amounts of data, they can take a simple prompt and output an answer driven not by emotion, but by cold, hard logic.
Yes...but also no. The 'massive amounts of data' these bots are trained on reflects a mix of emotion and logic, because it's human data. Furthermore, bots are obviously programmed by humans, who bring ideological bents of their own.
We welcome the judgment of our new chrome overlords.
Awww, how cute and quirky, never seen that before.