We all know that trying to get unfiltered responses on hot-button political or social topics from Big Tech AI models is a fool's errand, but I'm wondering if anyone has found them to be of any use when it comes to programming. Despite what a number of my professors say, I haven't found them useful at all. Maybe it's because I only ask hard or niche questions, when I can't find the answer elsewhere, but I haven't gotten any real help from the bots on my programming tasks. The last time I tried, one invented modules for a package out of thin air. Had those modules actually existed, I wouldn't have needed to ask the question in the first place. From what I've seen, the most they can do is help students cheat in their programming 101 classes. Has anyone here had a different experience?
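One practical workaround for the hallucinated-modules problem is to verify that a suggested import actually resolves before building on it. A minimal sketch in Python, using the standard library's `importlib` (the module names here are just placeholders for whatever the bot suggests):

```python
import importlib.util

def module_exists(name: str) -> bool:
    """Return True if the named top-level module can actually be found."""
    return importlib.util.find_spec(name) is not None

# A real stdlib module resolves; a made-up one does not.
print(module_exists("json"))                    # True
print(module_exists("totally_made_up_module"))  # False
```

This only confirms the module exists on your system, not that the functions the bot attributes to it are real; for that you still have to read the actual documentation.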
I have tried. I had to work harder to get it to produce something that works and matches what I need than it would have taken to write it myself. Granted, I didn't try very hard: I made a few attempts, got useless results, said "this is dumb," and did it myself.
That was the first inkling I got that LLMs are massively overhyped. Now I struggle to think of how I can use them at all, outside of dicking around and seeing what they spit out.
I'm coming to the conclusion that they're fun toys to play with, but completely unreliable for anything where the results actually matter. It's telling that everyone I talk to says they're great for <field the person doesn't have a PhD in> but useless in the field they actually know best. They're like Wikipedia in that sense, and this board in particular can attest to how unreliable Wikipedia can be.
But if you want to write limericks, man it's amazing!