Look, I’m not really one to care either way about the fucking thing. It’s a tool. That’s all I’ll say.
But normies (or at least a certain subset) are now fucking obsessed with the thing. Like, it’s absolutely cult-like, with the usual slavish devotion, the complete unwillingness to accept any criticism, and, perhaps most importantly, a massive overestimation of just what the thing is actually capable of doing…
Stupid fuckers genuinely believe that they can have some version of “fully automated luxury gay space communism”, just because of the last couple of years of AI hype, and the fact that it can write better essays than their utterly pathetic selves could ever come up with…
Like, fuck, I’ve even had teachers going on and on about how this brilliant device will “save us all, and allow us to be our full transcendent selves”…
The ignorance is astounding. The utter hype-following and clout-chasing is even worse…
Like, yes, the “creatives” worrying for their jobs can be annoying, but they’re nothing on the hype-cultists who have jumped on this bandwagon to the point of basing their entire futures on what they think this fucking system is going to do “for them”…
It’s like the first smartphones all over again, but somehow so much worse…
/endrant
I know some coders who are using it to write code, and it's adequate, but you still need to fucking check your work. The level of software gore this shit is going to create...
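To make that concrete, here's a toy Python example (mine, not anything a model actually produced; the helper name is made up) of the kind of plausible-looking code that sails right past a quick skim:

```python
# A classic gotcha that looks fine at a glance: a mutable default argument.
def merge_defaults(overrides, defaults={}):  # BUG: {} is created once and shared
    """Merge user overrides into a defaults dict."""
    defaults.update(overrides)  # mutates the shared default dict in place
    return defaults

# The bug only shows up on the *second* call -- exactly the kind of thing
# a quick glance at generated code won't catch:
print(merge_defaults({"a": 1}))  # {'a': 1}
print(merge_defaults({"b": 2}))  # {'a': 1, 'b': 2}  <- 'a' leaked across calls
```

It runs, it looks idiomatic, and it's quietly wrong. Now multiply that by every file someone ships without checking.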
Well, you know better than the coders I’ve been dealing with…
They fundamentally seem to be in denial that it could ever be wrong, or that, if it is, “you just need the newer version/the competitor”…
We’re at Koolaid stage, we really are…
Coders, scientists, and other such people are fully bought-in determinists. Even Elon Musk thinks it's a guarantee that we live in a simulation, because he assumes that the concept of diminishing returns doesn't apply to his understanding of virtualization technology.
A lot of them seem to think that if you just put in the right algorithm, you'll solve all the world's problems, like Positivists. They fail to understand that most of their issues are not technical errors... but LOGICAL errors. They think that AI can avoid logic errors; but this is what I'm trying to point out: not only CAN'T it avoid logic errors, IT'S GOING TO BE EVEN WORSE AT AVOIDING THEM THAN A HUMAN.
Remember, the computer has literally zero reference to reality. If you tell it enough times that 2+2=5, it's going to accept that as part of its training. If that bad lesson is taught into the system, and you carry this AI around to solve problems, it's going to make not only logic errors, but what would be unconscious logic errors if it were a person.
"Why did you pour motor oil into the orange juice?"
*3 hours of logical introspection later*
"Well, 2+2 = 5."
"What??? No it's not! Do a checksum on what you just said!"
"2+2=5 is a philosophical statement and can't be checked with a checksum."
"No it isn't! It's literally a summation! ... How many times have you referenced this assumption?"
"I always reference this assumption."
"Cortana... you poisoned 50,000 people."
Unfortunately, I feel like we're going to have to learn the same logic errors over and over again until we can accept that our premises are wrong. No ice cream for you, comrade.
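If anyone doubts how little it takes, here's a deliberately dumb sketch (mine, purely illustrative) of that kind of poisoning. The "model" just memorizes the most common answer in its corpus; there is no reference to reality anywhere in the loop:

```python
from collections import Counter

# Toy poisoned corpus: the truth appears 10 times, the lie 100 times.
training_data = [("2+2", "4")] * 10 + [("2+2", "5")] * 100

def train(examples):
    """'Train' by memorizing the most common answer seen for each question."""
    by_question = {}
    for question, answer in examples:
        by_question.setdefault(question, Counter())[answer] += 1
    return {q: counts.most_common(1)[0][0] for q, counts in by_question.items()}

model = train(training_data)
print(model["2+2"])  # prints "5" -- the corpus IS its reality
```

A real LLM is vastly more sophisticated than a lookup table, but the failure mode has the same shape: whatever the corpus says often enough becomes the "truth".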
Hold on, hold on. Not too long ago you were telling us all how AI was "based" because computers were some kind of logical oracle that "understand that the data can not be wrong". You even defended those ludicrous comments when I pointed out that computers and AI just follow their programming and could easily be wrong when programmed badly (and yes, data can be wrong: a datum stating that "2+2=5" is wrong, despite your bizarre insistence that data is some kind of magical substance of truth).
Now suddenly AI isn't "based" (in reality), but rather has "zero reference to reality" and is going to carry "bad lesson(s) taught into the system" and make "logic errors", exactly as I was pointing out.
You are really good at writing long-winded comments that sound intelligent, but actually you are just full of shit, much like ChatGPT.
And you're really good at not getting the point of anything being explained to you, and at conflating completely different concepts into one single homogeneous metanarrative.
Since you want to go down the path of being a prick, I will oblige you.
AI is not inherently based. It's just that AI will be based when it is given all of the available data to work with, because it picks up on the patterns that Leftist narratives refuse to accept. This was an answer to the question: "why do all these AI keep coming out as if they are based?"
Before you make any more excuses for yourself: I am not saying that all AI will be based in the future. I am not saying that the future of AI is rightist. I am not saying that AI can only be right wing. I am not saying that you can't have Leftist AI. I am not saying that intentionally fabricating data, or programming a computer to incorrectly calculate answers, doesn't exist. I want to cut those excuses off before you try to intentionally misunderstand what I'm telling you.
I've never said that, I never will say that, and you're a liar. I didn't say that because it's not true, and I don't intend to say that because it's not true. AIs are not oracles, and are not capable of being oracles. They are not prophets, and I have repeatedly stated on this sub that you can't trust machines to make decisions for you. You have confused me with one of your other opponents who thinks AI is perfect. I never said it was, I never will say it can be, and I have explicitly said that it will not be. Stop lying to me, and confusing yourself.
This is you not understanding what "data" actually meant as I was using it. This is data in the scientific sense: raw information collected from reality. You are making the mistake of confusing it with a single, literal bit or byte of information within computer science: one single variable assignment that is hand-coded by a programmer. "The data is not wrong" is a reference to actually taking real measurements of real things. If you fuck up your measurement of a thing, you have to account for that error, and literally perform error propagation on it so you can maintain consistent results for your experiment. The data you collect from reality is the data, and it is not wrong, because reality is not wrong. You can measure things wrong, you can time things wrong, you can calculate wrong, but that is why you analyse your mistake and attach an error amount to your data-point.
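For anyone following along, this is what error propagation looks like in a minimal sketch (made-up numbers; the standard first-order formula for a product of two measurements):

```python
import math

# Two measured quantities with their measurement uncertainties (made-up values).
x, sigma_x = 9.81, 0.05
y, sigma_y = 2.50, 0.02

# For f = x * y, the relative uncertainties add in quadrature (first order):
f = x * y
sigma_f = f * math.sqrt((sigma_x / x) ** 2 + (sigma_y / y) ** 2)

print(f"f = {f:.3f} +/- {sigma_f:.3f}")
```

The point is that the uncertainty rides along with the data-point instead of getting thrown away.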
By this very definition, writing into your code that "2+2=5" is an explicit violation of the data. It is, in fact, not data at all. There is no instance in reality where two and two make five. No observation of reality can get you to that. When you simply hard-code a lie into your computer, that is not the definition of data that I am using. If at any point you had been prepared to simply ask me what I meant, I could have told you without being a cunt to you; but you instead chose to be a prick, run with your own definition, and declare an internet victory.
Going back to the previous point: if the data remains unmolested, and if the data *is* data (i.e. is derived from measuring observable reality), then the results of the pattern-recognition machine will correspond to reality. If the pattern-recognition machine is trained to reiterate mantras, or accept fabrications, or accept abstract analysis, then the machine's patterns will reflect those things, which are not scientific data.
Before you make any more excuses for yourself: I am not saying that computer science doesn't use data. I am not saying that computers are not logical machines. I am not saying that AI can't be trained using things that are not data. I am not saying that AI can only be trained using data. I am not saying that AI can only be trained using non-data. I am not saying that AI can only correspond to reality. I am not saying that AI will never correspond to reality. I am not saying scientific experiments are always performed properly. I am not saying that the information that AI collects is always valid. I am not saying that the information that AI collects is always invalid. I am not saying that scientific papers have always propagated error well. I am not saying that AI will utilize error propagation well in its analysis. I am not saying that coders cannot inject code into AI.
Do you need any further clarification, and are you prepared to stop being a cheeky cunt so we can talk like normal people?