We all know that trying to get unbiased responses to anything involving hot-button political/social topics from Big Tech AI models is a fool's errand, but I'm wondering if anyone has found them to be of any use when it comes to programming. Despite what a number of my professors say, I haven't found them useful at all. Maybe it's because I'm only asking hard or niche questions when I can't find the answer elsewhere, but the bots haven't helped me with a single programming task. The last time I tried, one invented modules for a package out of thin air; had those modules actually existed, I wouldn't have needed to ask the question to begin with. From what I've seen, the most they can do is help students cheat in their Programming 101 classes. Has anyone here had a different experience?
Didn't a law firm use ChatGPT for legal research, only for it to make shit up wholesale and pull cases out of its ass?
Yep, an idiot lawyer used ChatGPT not just for legal research but for legal analysis AND legal writing, and ... well ... stupid in, stupid out.
What's changed in the last 6 months? Both Westlaw and LexisNexis are releasing their own AI tools with safeguards: Lexis+ AI and Westlaw Precision.
Just like how ChatGPT now handles math questions by spitting out an LLM answer and then invoking an equation solver/calculator process to verify it, these legal AIs run citation checks and the like to make sure their answers aren't complete nonsense.