I started working as a data scientist in 2019, and by 2021 I had realized that while the field was large, it was also largely fraudulent. Most of the leaders that I was working with clearly had not gotten as far as reading about it for thirty minutes despite insisting that things like, I dunno, the next five years of a ten thousand person non-tech organization should be entirely AI focused. The number of companies launching AI initiatives far outstripped the number of actual use cases. Most of the market was simply grifters and incompetents (sometimes both!) leveraging the hype to inflate their headcount so they could get promoted, or be seen as thought leaders.
And then some absolute son of a bitch created ChatGPT, and now look at us. Look at us, resplendent in our pauper's robes, stitched from corpulent greed and breathless credulity, spending half of the planet's engineering efforts to add chatbot support to every application under the sun when half of the industry hasn't worked out how to test database backups regularly. This is why I have to visit untold violence upon the next moron to propose that AI is the future of the business - not because this is impossible in principle, but because they are now indistinguishable from a hundred million willful fucking idiots.
I like this guy. :D
Surely they don't let you talk about that at ycombinator.
they have a podcast, and they've talked about bias in AI a few times now. one of the women starts going into DEI discussions, one of the guys kinda rolls his eyes and immediately pivots to talking about huggingface and companies using open source uncensored models.
he cut off there, but for anyone in the AI space, this is a hot topic. "trust and safety" and "ethics in AI" teams are just political assholes with no technical skills, and all they're doing is adding far left bias to everything, often to the point where it's less useful than something based in reality, or even not useful at all.
not even talking about race and IQ or race and criminality... one easy example here is fast credit scoring that requires minimal info and no hit to the credit report. finance models operate based on averages, so if the inputs can reliably correlate to an average net profit, it doesn't matter what those inputs are or why. one of the big models here is taking zip code and correlating that to HHI and ultimately creditworthiness. the regression analysis for this is quite simple, low level machine learning. but "ethics in AI" teams call this racist. at a minimum, these policy wanks want to force an output where all races have equal creditworthiness in the aggregate (they want to go farther than that, penalize some races, but they don't do that until they get to the same goal post). the problem is not only is there a high correlation between zip code and creditworthiness, there's also a high correlation between zip code and race... because there's a high correlation between creditworthiness and race.
so if you're running a business using censored, leftist AI like gemini or chatgpt, your AI is shit. it will lie and elevate certain people solely on the basis of race, and/or demote other people solely on the basis of race. and in the marketplace, you will get crushed by anyone who doesn't use censored, leftist AI like gemini or chatgpt, because reality does not have that liberal bias. black people don't suddenly act more creditworthy just because the algorithm approves them for credit.
YC doesn't want to invest in a company that's going to be hamstrung because of any bias, and that includes far left bias. this is another reason why they don't invest in chat wrappers, and shitRAGs get identified and passed on frequently. especially for shitRAGs, it's only a matter of time before google adds gemini to data studio, and suddenly they're all out of business.