We all know that trying to get non-pozzed responses to anything involving hot button political/social topics from Big Tech trained AI models is a fool's errand, but I'm wondering if anyone has found them to be of any use when it comes to programming. Despite what a number of my professors say, some of whom are definitely not diversity hires, I haven't found them to be of any use. Maybe it's because I'm only asking hard or niche questions when I can't find the answer elsewhere, but I haven't gotten any help from the bots in my programming tasks. The last time I tried it invented modules to a package out of thin air. Had those modules actually existed I wouldn't have needed to ask the question to begin with. From what I've seen the most it can do is help pajeets cheat in their programming 101 classes. Has anyone here had a different experience?
The problem with using natural languages to describe functionality is that natural languages have ambiguities. We already have a game of telephone when the customer tells the PM (or whomever) what they want and then that is eventually relayed to the programmer.
All of the abstractions built on top of machine code are still 100% unambiguous and can deterministically be converted into a lower-level compiled output, whether it's machine code, something that can be JIT compiled to machine code, or some interpreted VM bytecode.
And yeah, the need to write assembly code is largely gone now -- the last time I did was to optimize a blending algorithm to use MMX instructions to calculate 4 pixels simultaneously. 1999 or thereabout. Got a better than 4x performance improvement out of it, likely because it was able to make more efficient use of memory by fetching 128 bits at a time, and because MMX instructions have dedicated CPU resources that let other types of instructions run in parallel.
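For anyone curious what that style of SIMD pixel blending looks like today, here's a minimal sketch in C++ using SSE2 intrinsics instead of hand-written MMX assembly. The function name blend_avg and the 50/50 average blend are illustrative assumptions on my part, not the original algorithm:

```cpp
#include <emmintrin.h>  // SSE2 intrinsics
#include <cstdint>
#include <cstddef>

// Hypothetical sketch: average-blend two RGBA8888 buffers,
// processing 4 pixels (16 bytes) per 128-bit register.
void blend_avg(const uint8_t* a, const uint8_t* b, uint8_t* out, size_t n_pixels)
{
    size_t i = 0;
    for (; i + 4 <= n_pixels; i += 4) {
        __m128i va  = _mm_loadu_si128(reinterpret_cast<const __m128i*>(a + i * 4));
        __m128i vb  = _mm_loadu_si128(reinterpret_cast<const __m128i*>(b + i * 4));
        __m128i avg = _mm_avg_epu8(va, vb);  // per-byte rounded average, no overflow
        _mm_storeu_si128(reinterpret_cast<__m128i*>(out + i * 4), avg);
    }
    // Scalar tail for any leftover pixels.
    for (; i < n_pixels; ++i)
        for (int c = 0; c < 4; ++c)
            out[i * 4 + c] = static_cast<uint8_t>((a[i * 4 + c] + b[i * 4 + c] + 1) / 2);
}
```

Same basic idea -- several pixels per wide register -- but with intrinsics the compiler handles register allocation and instruction scheduling for you.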
Where the translation happens between language abstractions and machine code is unimportant, and is mostly irrelevant to the discussion IMO. At some point, you need to be able to tell the compiler, "I need a resizable list of 64-bit floating point numbers," and we already have a pretty concise way of doing that: std::vector<double> (or the equivalent in whatever strongly typed language you prefer -- not interested in getting into debates about duck-typed languages, which just reinforces my point that you can end up with ambiguity).
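To make that concrete, here's the trivial version in C++ (the variable name readings is just an illustrative choice):

```cpp
#include <vector>

int main() {
    // "A resizable list of 64-bit floating point numbers" -- one unambiguous declaration:
    std::vector<double> readings;
    readings.push_back(3.14);   // grows on demand; element type fixed at compile time
    readings.resize(100, 0.0);  // explicit resizing, no room for interpretation
}
```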
And I never said the tech is crap, I just don't think it will ever be a replacement for humans in this case. Just because you can't use a hammer as a screwdriver doesn't mean the hammer is crap. Maybe copium on my part, but I'm more worried about losing my job to Laquisha or Sundar because they're the correct color, than to an AI because it's better than me.
I agree that it's not a replacement for humans. I agree it's not a replacement for programmers. I've posted a bunch on this article today, and I've tried to be really clear that I view LLMs as a tool, and one that you need to be good at your job to use correctly.
My hope is actually that it will put a major dent in the outsourcing industry, because the kind of menial shit programming that so many outsourcing firms do can be done much more time-efficiently in-house.
The people (not you) who are putting their fingers in their ears and saying "my job is safe!" and pointing at the people who talk about AGI are entirely missing the point. LLMs are a tool, they're improving rapidly, and like it or not, they WILL impact how programming is done.
Ironically it's Laquisha or Sundar that should lose their jobs to AI.