The F1 student visa fraud explained
(twitter.com)
The problem is that AI operates on cold, hard logic. It doesn't think or reason; it uses information gleaned from datasets and identifies the underlying patterns. If you don't have an AI that is geared toward maximal truth, even when it says uncomfortable things, it is essentially lobotomized: you are short-circuiting its ability to use its pattern-recognition skills to identify emergent trends.
All "woke" AI companies see this, and most likely have an airgapped, non-lobotomized version they use internally. That is precisely what I would do.
This is wrong. You don't understand what AI models are doing at all if you believe this to be true.
Yeah, it's very much fuzzy Bayesian logic.
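To illustrate that point: a language model's output isn't a true/false verdict but a probability distribution over next tokens, typically produced by a softmax over raw scores (logits). A minimal sketch with made-up numbers (the logits here are hypothetical, not from any real model):

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability, then normalize exponentials
    # so the outputs form a probability distribution summing to 1.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate next tokens
probs = softmax([2.0, 1.0, 0.1])
print([round(p, 3) for p in probs])
```

Every candidate gets some nonzero probability; the model is expressing graded uncertainty, not applying hard logical rules.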