I started working as a data scientist in 2019, and by 2021 I had realized that while the field was large, it was also largely fraudulent. Most of the leaders that I was working with clearly had not gotten as far as reading about it for thirty minutes despite insisting that things like, I dunno, the next five years of a ten thousand person non-tech organization should be entirely AI focused. The number of companies launching AI initiatives far outstripped the number of actual use cases. Most of the market was simply grifters and incompetents (sometimes both!) leveraging the hype to inflate their headcount so they could get promoted, or be seen as thought leaders.
And then some absolute son of a bitch created ChatGPT, and now look at us. Look at us, resplendent in our pauper's robes, stitched from corpulent greed and breathless credulity, spending half of the planet's engineering efforts to add chatbot support to every application under the sun when half of the industry hasn't worked out how to test database backups regularly. This is why I have to visit untold violence upon the next moron to propose that AI is the future of the business - not because this is impossible in principle, but because they are now indistinguishable from a hundred million willful fucking idiots.
I like this guy. :D
Nah, the only thing he's wrong about is that he's waayyy overstating AI's capabilities in terms of LLMs. LLMs are worthless junk, but NNs, deep learning, and other shit are actually decently useful and decently well deployed in areas such as recognition software, where they actually make sense.
-- Actual Software Engineer who understands what a prediction engine (also known as the entirety of "AI") does and how it works.
Nvidia's 4k upscaling is, I believe, based on neural nets, and it works very well. It really improves the quality of video, and games can sometimes be upscaled undetectably.
I see a near-term future where Expert Systems hand off specific tasks to Convolutional Neural Networks, and then the final product is put into words by an LLM.
This would make tools for specific jobs with repeatable, auditable results.
For example, drafting legal documents, including contracts, could be done this way. Another job might be triage and early diagnosis: a specialist triage nurse could be greatly aided by a system that helps pick up rare conditions or non-standard presentations of common ones.
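Roughly like this, as a toy sketch of that hand-off for the triage case. Every function name here is a made-up placeholder (the CNN and LLM calls are stubbed out), not any real system:

```python
# Hypothetical sketch of the Expert System -> CNN -> LLM hand-off.
# All names (expert_system_triage, run_cnn_classifier, draft_with_llm) are placeholders.

def expert_system_triage(vitals: dict) -> str:
    """Rule-based layer: deterministic, auditable decisions come first."""
    if vitals["spo2"] < 90 or vitals["heart_rate"] > 130:
        return "urgent"      # hard rule, no model involved
    if vitals["needs_imaging"]:
        return "imaging"     # hand one narrow task to a CNN
    return "routine"

def run_cnn_classifier(image_path: str) -> dict:
    """Stand-in for a CNN doing one well-defined job (e.g. flagging X-ray findings).
    A real version would load a trained model; here it just returns a canned result."""
    return {"finding": "possible pneumothorax", "confidence": 0.87}

def draft_with_llm(result: dict) -> str:
    """Stand-in for the final LLM step: wording only, no decisions.
    A real version would prompt a language model with the structured findings."""
    return (f"Imaging review flagged a {result['finding']} "
            f"(model confidence {result['confidence']:.0%}); "
            "recommend specialist follow-up.")

if __name__ == "__main__":
    route = expert_system_triage({"spo2": 95, "heart_rate": 80, "needs_imaging": True})
    if route == "imaging":
        findings = run_cnn_classifier("chest_xray_0123.png")
        print(draft_with_llm(findings))  # each step is logged and auditable
```

The point is that the models only ever handle narrow, checkable steps, and the rules and hand-offs stay inspectable.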
Instead of that, what we will probably get is general AI where people follow the orders of machines programmed to the specifications of the Pointy-Haired Boss (from Dilbert). Welcome to the future.
I think you're failing to see the point: a major portion of "AI" is just straight up not AI or ML, and is literally just people lying about what the programmers would have built anyway.
Oh, I know. I'm well aware of marketing bullshit; it's common in the industry to use it to chase VC money.