I started working as a data scientist in 2019, and by 2021 I had realized that while the field was large, it was also largely fraudulent. Most of the leaders that I was working with clearly had not gotten as far as reading about it for thirty minutes despite insisting that things like, I dunno, the next five years of a ten thousand person non-tech organization should be entirely AI focused. The number of companies launching AI initiatives far outstripped the number of actual use cases. Most of the market was simply grifters and incompetents (sometimes both!) leveraging the hype to inflate their headcount so they could get promoted, or be seen as thought leaders.
And then some absolute son of a bitch created ChatGPT, and now look at us. Look at us, resplendent in our pauper's robes, stitched from corpulent greed and breathless credulity, spending half of the planet's engineering efforts to add chatbot support to every application under the sun when half of the industry hasn't worked out how to test database backups regularly. This is why I have to visit untold violence upon the next moron to propose that AI is the future of the business - not because this is impossible in principle, but because they are now indistinguishable from a hundred million willful fucking idiots.
I like this guy. :D
If we were to get true AI or something close to it, I'm betting it'd come from Japan or maybe Taiwan.
Why? Because I'm willing to bet that the Japanese population would rather invent Skynet, The matrix and/or become cyborgs than accept ANY increase in immigration if they can't improve birth rates.
Because the West is already compromised thanks to a third world invasion, there isn't the same incentive so 'AI business integration' is just another grift.
I remember seeing a thing on Japan and robots about 20 years ago or so, and the fucking CBC interviewer said something about "If you need more workers, why not let in third world immigrants?" The look on the lady's face was priceless, like she just sniffed a rotting corpse but was still trying to be polite, and she said "We would rather have robots".
They've been having discussions about it for at least 2 decades, I remember there was one Japanese guy that made a robot that looked EXACTLY like him and got people to interact with it.
The premise was to see if people prefer robots/androids that look like other humans or if it were better there was a visible difference. There is a reason I'm less terrified of AI being made there when they're already looking at successful integration than in the West which is still on segregation of groups, also says a lot that they think they can integrate better with artificial intelligence than third worlders....actually with some tourists recently, the west isn't doing that much better..
It's still useful for TPTB in the West though. It's an excuse to flood every company with worthless pajeets who cheated their way through their programming classes.
I think the only true AI we could have is something akin to an artificial brain, and it's something I've looked at quite a lot because I find it fascinating, which is why I reee so hard on this topic in particular and fight people over it. I mean real sentience: they'd establish thought patterns through their own mimicry of a human brain, but the brain itself would have to have neural pathways and everything.
All these computing algorithms are doing is spitting out convincing-looking results that match our own personal biases, and that's not sentient thought in the slightest. When we start seeing Robobrain-type stuff popping up in the market and the elites trying to preserve their brains after they die, then we can all start shitting bricks.
By the way, we're already partly there thanks to Elon Musk's Neuralink; the only puzzle left is storing brain matter over the long term and keeping it from rotting away like a human body. Depending on how effective storing brain matter turns out to be, we may end up technically cracking immortality, and I don't think that technology is far off.
I was about to post this one. You beat me to it.
He's an Indian programming engineer, which means he can program but doesn't understand design or how to use things unless explained directly. So the cool things being made by AI are cool, but it's beyond his ability to comprehend it. All he sees is the fake stuff and the grift.
Perfectly stated. As you can see we have some people like that around here.
We're all exhausted by 'The new AI Skateboard powered by AI!', but in any industry/tech the scammers and grifters around a product are going to rise in proportion to the perceived value of that product, which is still somewhat correlated to the real value of the product even if the hucksters have hyped it beyond all recognition. Someday the hype will die down and only the valuable stuff will remain.
There are articles asking why we're still investing in self-driving cars. They're getting released soon, btw.
I have done the initial designs for a theme park using AI. The trick is to know how it works and how to do your job. Once you do that, it's really useful. However, most guys try to skip one or both of those steps and claim to be revolutionary.
No less than Mercedes promised a self-driving taxi in 2025, and I'm still holding them to that. :)
Cuz I really believed it from them. When some a-hole startup full of 21 yr olds says they're gonna do it, you don't trust them. But Mercedes has actually proven themselves able to design and build things that work well. I mean not that thing, but other things.
Amazon also had something, but it's the 21 year olds you mentioned.
I was thinking I have no idea when I read it. AI seems promising but very early. Hype happens around every new technology. v1 often has little utility. The market is there to sort out winners and losers, to the extent that it works.
The analogy to natural language software might be good. People take it for granted that a computer can basically write down what you're saying today. The first versions of that sucked. But people acted like it was going to be your primary way of talking to the computer. In the end, like I said, we use it on our Fire TVs, and dictation is quite useful for certain professions. The job of typist has gone out of style. It has not replaced typing, but it is an immensely useful tool, now, whereas the first versions were only really usable to repeatedly fill out forms and charts. Oh and then of course the other direction of that works so well that we have deepfake audio.
I've seen people try to build code with AI as if it were natural language software. Because the AI still doesn't understand what it's actually supposed to do, people have basically taken the same amount of time fixing it as they would have spent building it correctly in the first place.
Yeah that's my experience of the chatbots. They're just crapping out the input, and they know how to rephrase things. I haven't seen one solve an actual problem.
That said, a robot that just does what you tell it to is good. I dunno how intelligent I need these things to be. Chatbots are not really useful to me. But I'm just saying AI applied to, I dunno, cleaning up my kitchen is good. The robot can clean it according to the instructions on YouTube.
Robots are the definition of a "Capital Investment":
"Here is the procedure. Do it forever."
"OK"
If there is any variance in any scope of this situation, the robot will produce poor results.
AI is effectively just 1,000 Indian programmers typing away without context.
AI is also literally 1,000 Indian programmers typing away without context on at least one occasion.
Nah, the only thing he's wrong about is he's waayyy overstating AI's capabilities in terms of LLMs. LLMs are worthless junk, but NNs and DeepLearning and other shit is actually decently useful and decently well deployed in areas such as recognition software where it actually makes sense.
-- Actual Software Engineer who understands what a prediction engine (also known as the entirety of "AI") does and how it works.
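The "prediction engine" framing above can be illustrated with a toy: a bigram model that only ever predicts the statistically most likely next token given the previous one. This is a hedged sketch of the core idea, not how any real LLM is implemented (those use neural networks over far larger contexts):

```python
from collections import Counter, defaultdict

# Toy "prediction engine": count which token follows which in a tiny corpus,
# then predict by picking the most frequent follower. An LLM in miniature,
# minus the neural network and the billion parameters.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    """Return the token most often seen after `token`, or None if unseen."""
    counts = following.get(token)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — it follows "the" most often in this corpus
```

Nothing here "understands" cats or mats; it is pure frequency statistics, which is the point the poster is making about prediction.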
Nvidia's 4k upscaling is, I believe, based on neural nets, and it works very well. It really improves the quality of video, and games can be scaled undetectably sometimes.
I see a near-term future where Expert Systems hand off specific tasks to Convolutional Neural Networks, and then the final product is put into words by an LLM.
This would make tools for specific jobs with repeatable, auditable results.
For example, drafting legal documents, including contracts could be done this way. Another job might be triage and early diagnosis. A specialist triage nurse could be greatly aided by a system that is helpful for picking up rare conditions or non-standard presentation of conditions.
The idea of general AI where people follow the orders of machines programmed to the specification of the Pointy Haired Boss (from Dilbert) is probably what we will get instead. Welcome to the future.
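The hand-off pipeline described above can be sketched in a few lines. Everything here is hypothetical: the routing rules, function names, and outputs are illustrations, and the neural-net and language-model stages are stubbed out since the post names no concrete models.

```python
# Hedged sketch of the Expert System -> neural net -> LLM hand-off pipeline.
# All names and rules are hypothetical stand-ins for real components.

def expert_system_route(symptoms):
    """Expert-system stage: hard-coded, auditable triage rules."""
    if "chest pain" in symptoms:
        return "cardiac"
    if "rash" in symptoms:
        return "dermatology"
    return "general"

def specialist_model(route, symptoms):
    """Stub for the neural-net stage: would score the case for rare conditions."""
    return {"route": route, "flagged": "chest pain" in symptoms}

def wording_model(result):
    """Stub for the LLM stage: puts the structured result into words."""
    urgency = "urgent" if result["flagged"] else "routine"
    return f"Routed to {result['route']}; assessment is {urgency}."

symptoms = ["chest pain", "nausea"]
report = wording_model(specialist_model(expert_system_route(symptoms), symptoms))
print(report)  # Routed to cardiac; assessment is urgent.
```

The appeal of this shape is exactly the "repeatable, auditable" property: every routing decision is traceable to an explicit rule, and only the final wording is left to the language model.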
I think you're failing to see the point that a major portion of "AI" is just straight-up not AI or ML, and is literally just people lying about what the programmers would have built anyway.
Oh I know, I am well aware of marketing bullshit, it's common in the industry to use it to search for VC money.
AI is definitely cool and it can definitely do some amazing things, however he's right to call all the fake shit out and there are tons of different forms of AI out there and these overhyped machine learning algorithms being used for chat bots or art generation are just one of many techniques.
Never thought I'd end up in complete agreement with an Indian on tech but that shows you how fucked the conversation is around AI generally.
He's definitely not an average one and does know how it works. He's also trying to prove he made the right choice to himself, and I can see that in the writing.
This guy sounds like a fuckhead even if he's basically right.
People really don't fucking like it when they have their stupid larp dashed, I'm still glancing through it but this guy really does know what he's talking about.
Anyone else old enough to remember when "Expert Systems" were going to take over every decision making process?
A COMPUTER CAN NEVER BE HELD ACCOUNTABLE
THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION
This should be inscribed in stone.
"No computer makes a decision"
Remember the "smart"-crisis? Smart-fridge, smart oven, smart cars, smart-TV. AI is the next evolution of that, except that this time, it's not mostly limited to consumers / individuals, but can also widely be used by companies. And the average suit is just as stupid as the average consumer.
People are obsessed with technology nowadays, so when the latest tech is finally in a working state and presented to the world, they all rush to use it and to apply it to things it's not designed for.
Then you add to this the fact that fake intelligence is a very good illusion of consciousness. For a non-tech guy, if he can have a discussion with an AI like ChatGPT, he will (wrongly) imagine the AI has some sort of consciousness of its own, as if it were a real person, rather than just a more complex algorithm than Google search. If AIs weren't allowed to use "I" to talk about themselves, I bet people wouldn't feel as comfortable using them.
I still can't get over that so-called engineer from Google seriously thinking their shitty chatbot was developing a proto-consciousness, and writing an open letter in protest. The guy should not be allowed near computers.
That didn't stop.
Actually I'm trying to figure out how to make money by making a smart silverware drawer or something. Everything needs a CPU. And Wifi. A whole Pi. A lot of these things remind me of jank solutions that I've known people to assemble over the years to solve their own annoyances. It's cute when people make an open source project of their beer-bottle counter. I never really thought "hey everyone needs one".
It is my personal opinion that since WW2 there has been a fanatical belief that "new = good" and "change = positive". Well, to quote the Cheshire Cat in Alice: Madness Returns (very inspiring source, I know), "change (new) is neither good nor bad; it only means it's not the same anymore".
Y Combinator (one of the most successful VC/accelerator programs of all time) has been ragging on AI for a while now.
they get a million applications that are just slapping AI on something. the first layer is chatgpt wrappers, maybe with a little prompt engineering, and that shit was DOA already. they add nothing of value.
the second layer is RAGs, where they at least integrate your existing knowledge base and database. as that matures, people are calling them "shitRAGs". these companies will be wiped out by generic AI platforms. even worse, no one wants to give up all that private data to a third party company, which is why everyone is doing this shit in-house. and increasingly, far left bias is making this shit bad for business, so people are implementing open source uncensored models.
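For anyone who hasn't seen the pattern: RAG (retrieval-augmented generation) just means retrieving the documents most relevant to a query from your knowledge base and stuffing them into the prompt. A toy sketch, using word overlap in place of the vector embeddings real systems use (the knowledge-base contents here are made up):

```python
# Toy RAG: rank docs by relevance to the query, put the winners in the prompt.
# Real systems use vector embeddings; word overlap stands in for that here.

knowledge_base = [
    "Backups are tested every Sunday at 02:00.",
    "The staging database is refreshed nightly.",
    "Support tickets are triaged within four hours.",
]

def retrieve(query, docs, k=1):
    """Rank docs by how many query words they share; return the top k."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)[:k]

def build_prompt(query, docs):
    """Prepend the retrieved context to the question before sending to the model."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("when are backups tested", knowledge_base))
```

Which also shows why the poster's privacy point bites: the whole technique depends on handing your internal documents to whatever model sits behind the prompt, so companies keep it in-house.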
Surely they don't let you talk about that at ycombinator.
they have a podcast, and they talked about bias in AI a few times now. one of the girls starts going into DEI discussions, one of the guys kinda rolls his eyes, and immediately goes into talking about huggingface and companies using open source uncensored models.
he cut off there, but for anyone in the AI space, this is a hot topic. "trust and safety" and "ethics in AI" teams are just political assholes with no technical skills, and all they're doing is adding far left bias to everything, often to the point where it's less useful than something based in reality, or even not useful at all.
not even talking about race and IQ or race and criminality... one easy example here is fast credit scoring that requires minimal info and no hit to the credit report. finance models operate based on averages, so if the inputs can reliably correlate to an average net profit, it doesn't matter what those inputs are or why. one of the big models here is taking zip code and correlating that to HHI and ultimately creditworthiness. the regression analysis for this is quite simple, low level machine learning. but "ethics in AI" teams call this racist. at a minimum, these policy wanks want to force an output where all races have equal creditworthiness in the aggregate (they want to go farther than that, penalize some races, but they don't do that until they get to the same goal post). the problem is not only is there a high correlation between zip code and creditworthiness, there's also a high correlation between zip code and race... because there's a high correlation between creditworthiness and race.
so if you're running a business using censored, leftist AI like gemini or chatgpt, your AI is shit. it will lie and elevate certain people solely on the basis of race, and/or demote other people solely on the basis of race. and in the marketplace, you will get crushed by anyone who doesn't use censored, leftist AI like gemini or chatgpt, because reality does not have that liberal bias. black people don't suddenly act more creditworthy just because the algorithm approves them for credit.
YC doesn't want to invest in a company that's going to be hamstrung because of any bias, and that includes far left bias. this is another reason why they don't invest chat wrappers, and shitRAGs get identified and passed on frequently. especially for shitRAGs, it's only a matter of time before google adds gemini to data studio, and suddenly they're all out of business.
Explain your acronyms instead of assuming everyone knows what the fuck you're on about.
it stands for retrieval augmented generation, just means they added more info to it... literally in that fucking sentence you fucking faggot.
Don't take a condescending tone. You presented an acronym without sufficiently explaining it and got called on it. That's your failing, not mine.
literally the same fucking sentence, moron.
Cry more.
lol, you're the one crying because of your illiteracy. project more.
You're the one projecting here. Go read your original post again.
Rimworld could probably be fun with some kind of Chat-GPT running the little comments the pawns make (I use Speech Bubbles mod).
That is a cool idea. Don't use Chat-GPT though, RimWorld is too raw for it. All those pawns running around quipping "I'm just a language model.." or "Let's steer the conversation to something that inspires more equitable feelings"
Any of the advanced Claude models would be down for the wasteland of RimWorld.
Everything is fake and gay
this is my experience too. the suits want ai, and they don't care what it does. it's like everyone has gone full retard.
Very similar article from Ticker guy two days before too: https://market-ticker.org/akcs-www?post=251500
LOL even when complaining about the industry the Indian can't help copy someone else.
I mean, you told them to learn to code, what the fuck did you expect?
All of the tumblr-adjacent "creatives" seething that their mediocrity is getting replaced with a shell script is music to my ears.
My only problem with AI currently is how much of it is your standard search algorithm with a shiny new label attached. Gets all the idiot wall street lads going though.
The problem with AI currently is that every company is trying to shove it into every one of their products, even though AI is nowhere near at the level necessary for that to actually be productive or useful. Until AI can consistently draw hands correctly, or not recommend that you put glue in your cheese to make it thicker, I think we should lay off putting AI into everything under the fucking sun.
100%.
AI is a bubble. Most "AI" isn't even Machine Learning. They are barely functional pattern-recognition algorithms at best.
I have explicitly called out vendors who have thrown "AI" at me as a buzzword, and the moment it was clear that I knew what I was talking about, they immediately said, "Yeah, it's not AI".
GETTA LOAD OF THIS GUY!
HE THINKS THERE ARE BACKUPS!
WHAT AN IDIOT!
I'm not even sure most people have a test environment.
Unfortunately I know tech people who think this way; everything they do now, they just type into ChatGPT. Basic emails and texts, every line of code, every task they try to run, etc.
So ... AI is the future of the business...
...I wanna mention ai to him...
A lot of stuff he's saying sounds like competency crisis. He can call it retrograde amnesia if he wants. Maybe the engineers are leaving because after a year they can't stand to maintain their own "old" code. (That took me 15 years, natch.)
I dunno bro. That shit is great for creating worthless "recommendation" or "character witness" letters.