Look, I’m not really one to care either way about the fucking thing. It’s a tool. That’s all I’ll say.
But normies (or at least a certain subset) are now fucking obsessed with the thing. Like, it's absolutely cult-like, with the usual slavish devotion, the complete unwillingness to accept any criticism, and, perhaps most importantly, a massive overestimation of just what the thing is capable of doing…
Stupid fuckers genuinely believe that they can have some version of “fully automated luxury gay space communism”, just because of the last couple of years of AI hype, and the fact that it can write better essays than their utterly pathetic selves can even come up with…
Like, fuck, I’ve even had teachers going on and on about how this brilliant device will “save us all, and allow us to be our full transcendent selves”…
The ignorance is astounding. The utter hype-following and clout-chasing is even worse…
Like, yes, the “creatives” worrying for their jobs can be annoying, but they’re nothing on the hype-cultists who have jumped on this bandwagon to the point of basing their entire futures on what they think this fucking system is going to do “for them”…
It’s like the first smartphones all over again, but somehow so much worse…
/endrant
And you're really good at not getting the point of anything being explained to you, and at conflating completely different concepts into one single homogeneous metanarrative.
Since you want to go down the path of being a prick, I will oblige you.
AI is not inherently based. It's just that AI will be based when it is given all of the available data to work with, because it picks up on the patterns that Leftist narratives refuse to accept. This was an answer to the question: "why do all these AI keep coming out as if they are based?"
Before you make any more excuses for yourself: I am not saying that all AI will be based in the future. I am not saying that the future of AI is rightist. I am not saying that AI can only be right-wing. I am not saying that you can't have Leftist AI. I am not saying that intentionally fabricating data, or programming a computer to incorrectly calculate answers, doesn't happen. I want to cut those excuses off before you try to intentionally misunderstand what I'm telling you.
I've never said that, I never will say that, and you're a liar. I didn't say that because it's not true, and I don't intend to say that because it's not true. AI's are not oracles, and are not capable of being oracles. They are not prophets, and I have repeatedly stated on this sub that you can't trust machines to make decisions for you. You have confused me with one of your other opponents that think AI is perfect. I never said it was, I never will say it can be, and I have explicitly said that it will not be. Stop lying to me, and confusing yourself.
This is you not understanding what I actually meant by data. This is data in the scientific sense: raw information collected from reality. You are making the mistake of confusing it with a single, literal bit or byte of information within computer science, one single variable assignment that is hand-coded by a programmer. "The data is not wrong" is a reference to actually taking real measurements of real things. If you fuck up your measurement of a thing, you have to account for that error, and literally perform error propagation on your uncertainty so you can maintain consistent results for your experiment. The data you collect from reality is the data, and it is not wrong, because reality is not wrong. You can measure things wrong, you can time things wrong, you can calculate wrong, but that is why you analyse your mistake and attach an error value to your data-point.
By this very definition, writing into your code that "2+2=5" is an explicit violation of the data. It is, in fact, not data at all. There is no instance in reality where two and two make five. No observation in reality can get to that. When you simply hard code a lie into your computer, that is not the definition of data that I am using. If at any point you are prepared to simply ask me what I mean, I could tell you without being a cunt to you; but you instead chose to be a prick, run with your definition, and declare an internet victory.
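For the laypeople following along, here is a minimal sketch of what "propagating error on a data-point" looks like in practice, assuming the textbook case of adding two independent measurements whose uncertainties combine in quadrature (the function name is mine, not from any library):

```python
import math

def propagate_add(val_a, err_a, val_b, err_b):
    """Sum two independent measurements; their errors add in quadrature."""
    return val_a + val_b, math.sqrt(err_a**2 + err_b**2)

# Two real measurements of 2.0, each with its own uncertainty of 0.1.
total, total_err = propagate_add(2.0, 0.1, 2.0, 0.1)
print(f"{total} ± {total_err:.3f}")  # 4.0 ± 0.141
```

The point: measured "two plus two" comes out as 4.0 with an honest uncertainty attached, never as a hard-coded 5.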
Going back to the previous point, if the data remains unmolested, and *if* the data is data (is derived from measuring observable reality), then the results of the pattern recognition machine will correspond to reality. If the pattern recognition machine is trained to re-iterate mantras, or accept fabrications, or accept abstract analysis, then the machine's patterns will reflect those, which are not scientific data.
Before you make any more excuses for yourself: I am not saying that computer science doesn't use data. I am not saying that computers are not logical machines. I am not saying that AI can't be trained using things that are not data. I am not saying that AI can only be trained using data. I am not saying that AI can only be trained using non-data. I am not saying that AI can only correspond to reality. I am not saying that AI will never correspond to reality. I am not saying scientific experiments are always performed properly. I am not saying that the information that AI collects is always valid. I am not saying that the information that AI collects is always invalid. I am not saying that scientific papers have always propagated error well. I am not saying that AI will utilize error propagation well in its analysis. I am not saying that coders cannot inject code into AI.
Do you need any further clarification, and are you prepared to stop being a cheeky cunt so we can talk like normal people?
Maybe so, but then all the laypeople like me wouldn't learn as much or be as entertained!
Okay, I forgot about you guys for a second.