We all know that trying to get non-pozzed responses to anything involving hot button political/social topics from Big Tech trained AI models is a fool's errand, but I'm wondering if anyone has found them to be of any use when it comes to programming. Despite what a number of my professors say, some of whom are definitely not diversity hires, I haven't found them to be of any use. Maybe it's because I'm only asking hard or niche questions when I can't find the answer elsewhere, but I haven't gotten any help from the bots in my programming tasks. The last time I tried it invented modules to a package out of thin air. Had those modules actually existed I wouldn't have needed to ask the question to begin with. From what I've seen the most it can do is help pajeets cheat in their programming 101 classes. Has anyone here had a different experience?
Comments (37)
I think to get much use out of them, you'd need to have such a simple task that it could be accomplished in only one or a handful of functions. Something akin to code snippets you might find and adapt from Stack Overflow.
For anything more complex, you'd have to describe it at such a level of detail that you might as well just be writing code at that point. And if the AI did manage to spit out a working program that appears to do what you ask, you'd need to do a shit ton of verification on it to make sure it actually does, in fact, do what you ask.
As a simple example, "I need to do a bunch of work on one thread and store the results as they become available, then display the status and the results of that work in a UI, and it all needs to be thread-safe" isn't something I see an AI being able to do, ever, unless you give it exact, specific detail about absolutely everything. And giving exact, specific detail about absolutely everything is what we call "programming."
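Just to make that concrete, here's a minimal sketch in Python of the pattern I'm describing, using a queue for thread-safety (the "work" itself is a made-up placeholder). Even this toy version forces you to decide on a sentinel, a polling interval, and so on -- exactly the detail you'd have to spell out for the AI anyway:

```python
# Minimal sketch of the pattern: background work, thread-safe result
# storage, and a UI-style status loop. The "work" is a placeholder.
import queue
import threading
import time

results = queue.Queue()  # thread-safe by construction

def worker(jobs):
    for job in jobs:
        time.sleep(0.1)           # stand-in for real work
        results.put((job, job * job))
    results.put(None)             # sentinel: all work finished

t = threading.Thread(target=worker, args=(range(5),), daemon=True)
t.start()

# "UI" loop: poll without blocking, the way a GUI timer callback would.
done = False
while not done:
    try:
        item = results.get(timeout=0.05)
    except queue.Empty:
        print("status: working...")
        continue
    if item is None:
        done = True
    else:
        print(f"result: {item}")
print("status: finished")
```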
Yeah, that's pretty much my thinking on the matter. Even if it did write code that seems to work I'd be so distrustful of it (especially since we know it lies about political topics and makes shit up when it doesn't know) that I'd end up doing more testing and verification than if I just wrote the code myself. Not to mention how much easier debugging is when you wrote the code yourself and know your thought process.
I think this is kind of the heart of the matter, though I disagree with you as to the extent.
The earliest assemblers translated code like "MOV AX, 1" into the exact processor opcodes, using the exact registers you specify as a programmer. Today, almost nobody ever needs to use assembly language. I think the last time I used it was in school, writing a Towers of Hanoi calculator!
Then we have languages like C that are a step more abstracted. Especially in the early days of C, and before things like pipelining, threading, etc., became so prevalent, a good C coder could more or less predict the exact assembly code that given C code would produce.
Of course there were other early languages, like LISP, that were far more abstracted, and as we move forward to modern languages, almost all of which implement some degree of functional programming, that tie between assembly code and the code you write is impossible to discern.
Essentially, our programming languages keep getting more and more abstracted.
LLMs are, imho, the next step in this evolution. They are not perfect. They are still improving--very rapidly--but we seem to be on the precipice of a world where a programmer describing a problem algorithmically to an LLM can get very solid results.
Think of the LLM as a new type of compiler that compiles English into instructions. It's almost the realization of Larry Wall's dream with Perl!
The horror that many programmers feel today is akin to what the assembly experts felt at the unoptimized code streaming out of compilers.
I rarely speak in absolutes, but imho, anyone who says this technology is crap is just ignorant. We are only 1-2 years into seeing what LLMs can do. What will they be like in 5 years? 10 years? 20 years? Even with just linear improvements, LLMs are going to have a huge impact on coding for the foreseeable future.
The problem with using natural languages to describe functionality is that natural languages have ambiguities. We already have a game of telephone when the customer tells the PM (or whomever) what they want and then that is eventually relayed to the programmer.
All of the abstractions built on top of machine code are still 100% unambiguous and can deterministically be converted into a lower-level compiled output, whether it's machine code, something that can be JIT compiled to machine code, or some interpreted VM bytecode.
And yeah, the need to write assembly code is largely gone now -- the last time I did was to optimize a blending algorithm to use MMX instructions to calculate 4 pixels simultaneously. 1999 or thereabouts. Got a better than 4x performance improvement out of it, likely because it could fetch memory more efficiently, 128 bits at a time, and because MMX instructions have dedicated CPU resources that allow other types of instructions to run in parallel.
Where the translation happens between language abstractions and machine code is unimportant, and is mostly irrelevant to the discussion IMO. At some point, you need to be able to tell the compiler, "I need a resizable list of 64-bit floating point numbers," and we already have a pretty concise way of doing that: std::vector<double> (or whatever strongly-typed language you prefer -- not interested in getting into debates about duck-typed languages, which just reinforces my point that you can end up with ambiguity).
And I never said the tech is crap, I just don't think it will ever be a replacement for humans in this case. Just because you can't use a hammer as a screwdriver doesn't mean the hammer is crap. Maybe copium on my part, but I'm more worried about losing my job to Laquisha or Sundar because they're the correct color, than to an AI because it's better than me.
I agree that it is not a replacement for humans. I agree it's not a replacement for programmers. I've posted a bunch on this article today, and I've tried to be really clear that I view LLMs as a tool, one that you need to be good at your job to know how to use correctly.
My hope is actually that it will put a major dent in the outsourcing industry, because the kind of menial shit programming that so many outsourcing firms do can be done much more time-efficiently in-house.
The people (not you) who are putting their fingers in their ears and saying "my job is safe!" and pointing at the people who talk about AGI are entirely missing the point. LLMs are a tool, they're improving rapidly, and like it or not, they WILL impact how programming is done.
Ironically it's Laquisha or Sundar that should lose their jobs to AI.
I've been using https://deepai.org/chat a lot at work, where I can't access my own models. It's saved me time writing templates for SQL, Python, and PowerShell scripts. It's good for tight, well-defined tasks. Like you said, if you ask it something obscure or elaborate, it starts making things up.
The most common tasks I give it are reformatting SQL queries, rearranging new queries around the same data, or transposing column output. Basically it's an advanced macro tool.
Yeah, that's what I've been using. There's no way I'm making an account and allowing Big Tech pedos to monitor what I'm doing, especially considering the awful shit I say to that bot lol.
Didn't a law firm use ChatGPT for legal research, only for it to make shit up wholesale and pull cases out of its ass?
Yup, and they got reamed out by the judge for it IIRC.
Yep, an idiot lawyer used ChatGPT not just for legal research but for legal analysis AND legal writing, and ... well ... stupid in, stupid out.
What's changed in the last 6 months? Both Westlaw and LexisNexis are releasing their own AI tools with safeguards.
Lexis+ AI and Westlaw Precision
Just like how when you ask chatGPT-4 math questions now it spits out an LLM answer and then invokes an equation solver/calculator process, these AIs do citation checking, etc, to make sure their answers are not complete nonsense.
Supposedly GitHub copilot is good, but I haven't actually used it myself. ChatGPT is hit or miss because it does like to just make stuff up.
Yes, I find it very useful for speeding up a lot of rote coding. You've got to be a good coder to start with to use it effectively, and you need to be able to break down a problem algorithmically. As long as you have those skills, GPT can be great.
As an example, I had to create a small website w/ backend recently. It had a very specific function and was only going to be used by a small number of people (fewer than 10), but it would be immensely helpful for solving a problem in their workflows.
My prompt to chatGPT was more or less:
"Write javascript code, using modern idioms, to make restful AJAX calls using POST, and DELETE. The target URL is ______. Here is the HTML of my form:
[paste in raw html from my form]
Make sure that the AJAX code has full error checking and sanitization of inputs.
After getting a response to the POST, parse the return value as JSON. If the value of ___ is ____ then display message "abc def".
etc
I'll look at the code it generates and tell it to fix any issues, and then I'll add more complexity: "Now make it handle this case, where the POST value is ...." (You can honestly start to get pretty vague once you're a query or two in.)
Then I can say "Generate code in _____ language, using MySQL bindings, to implement a backend for target URL _____."
I guess you could say I'm using it like an advanced templating engine. You can keep these queries going a long way; it's pretty nice.
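To give a flavor of the output, here's a minimal sketch of the kind of backend scaffold that last prompt tends to produce, assuming Python/Flask and a made-up /api/items endpoint (the actual language and URL are the blanks above, so treat every name here as hypothetical):

```python
# Minimal sketch only -- Flask, the /api/items endpoint, the items
# table, and the credentials are all hypothetical stand-ins for the
# blanks in the prompt above.
from flask import Flask, request, jsonify
import mysql.connector

app = Flask(__name__)

def get_db():
    # Hypothetical connection details.
    return mysql.connector.connect(
        host="localhost", user="app", password="secret", database="appdb"
    )

@app.route("/api/items", methods=["POST"])
def create_item():
    data = request.get_json(force=True)
    name = str(data.get("name", "")).strip()
    if not name:
        return jsonify({"error": "name is required"}), 400
    db = get_db()
    cur = db.cursor()
    # Parameterized query: user input never gets interpolated into SQL.
    cur.execute("INSERT INTO items (name) VALUES (%s)", (name,))
    db.commit()
    new_id = cur.lastrowid
    db.close()
    return jsonify({"id": new_id, "name": name}), 201

@app.route("/api/items/<int:item_id>", methods=["DELETE"])
def delete_item(item_id):
    db = get_db()
    cur = db.cursor()
    cur.execute("DELETE FROM items WHERE id = %s", (item_id,))
    db.commit()
    db.close()
    return jsonify({"deleted": item_id})

if __name__ == "__main__":
    app.run(debug=True)
```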
Another thing you can do that I have found VERY useful:
You can paste in the SQL table schema for multiple tables. You can then paste in some queries and say "optimize this SQL query, given the above tables."
I had one very complicated query with like 10 tables, weird conditionals, weird aggregates, etc., and I just could not make the thing perform any better. A friend tried too.
ChatGPT did an amazing job optimizing it. Out of about a dozen queries I tried this on, chatGPT made 2-3 MUCH faster, ~2 somewhat faster, and did nothing for the rest.
You could also run the query analyzer of just about any modern database engine since about 2000 and it would give you not only suggestions for how to improve it, but also note any missing indices that it could use to potentially speed it up by orders of magnitude.
Yeah, I did EXPLAIN/EXPLAIN ANALYZE, I did index analysis. This was a complicated query with a lot of joins and subqueries. ChatGPT completely reorganized it. It was great.
I've used it for Python scripts and it's worked, but it still takes time, and it constantly forgets pieces of code. You have to keep testing what it gives you because it will break things. I mainly had it make scripts for scraping and Excel manipulation. Those probably work well enough because they're very common on Stack Overflow, etc.
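For reference, the scraping scripts I mean are mostly boilerplate of this shape; a minimal sketch assuming requests/BeautifulSoup, with a made-up URL and selectors:

```python
# Minimal sketch of the kind of scraping/Excel script I mean -- the
# URL and CSS selectors are made up for illustration.
import requests
import pandas as pd
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/listings", timeout=10)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
rows = []
for item in soup.select("div.listing"):
    title = item.select_one("h2")
    price = item.select_one("span.price")
    rows.append({
        "title": title.get_text(strip=True) if title else "",
        "price": price.get_text(strip=True) if price else "",
    })

# The Excel-manipulation half: dump the results to a spreadsheet.
pd.DataFrame(rows).to_excel("listings.xlsx", index=False)
```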
I work professionally in a shop that’s making use of Amazon’s AI tool.
I’m seeing the same things you are. The code it spits out, while eerily identical to my style/naming conventions, is worthless about 80% of the time. Either generating a simplistic solution that doesn’t actually do what I need, creating module/function names out of thin air (surprised me the first time I saw that - “whoa - I didn’t know that library could do that, excellent… that makes this task a lot easier and… oh… it doesn’t…”), or writing one line of code that saves me all of 5 seconds.
Where it works for me is for simple, grunt tasks like writing a switch statement to transform an object. Or when I have coder’s block, or a function I don’t want to have to write, it can sometimes spit out some code that gets me far enough along that I can complete the task. IF it’s simple enough. If you’re doing asynchronous call handling, forget it!
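To be concrete about the "switch statement to transform an object" kind of grunt work, here's a minimal sketch of that sort of task, in Python for illustration, with made-up field names:

```python
# Minimal sketch of the "transform an object" grunt work -- the field
# names and status codes are hypothetical.
def transform_record(record: dict) -> dict:
    status_map = {
        "A": "active",
        "I": "inactive",
        "P": "pending",
    }
    return {
        "id": record["user_id"],
        "name": f'{record["first_name"]} {record["last_name"]}',
        "status": status_map.get(record["status_code"], "unknown"),
    }

print(transform_record({
    "user_id": 42,
    "first_name": "Ada",
    "last_name": "Lovelace",
    "status_code": "A",
}))
```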
I have tried. I had to work harder to get it to give me something that works and matches what I need than I would have to write it myself. Granted, I didn't really try very hard. I just tried a bit, got useless results, said "this is dumb" and did it myself.
That was the first inkling I got that LLMs are massively overhyped. Now I struggle to think of how I can use them at all, outside of dicking around and seeing what they spit out.
I'm coming to the conclusion that they're fun toys to play with (they would be even more fun if they weren't censored by woketards) but completely unreliable for anything where the results actually matter. It's telling that everyone I talk to says it's great for <field the person doesn't have a PhD in> but useless in the field they actually know best. It's like Wikipedia in that sense, and this board in particular can attest to how pozzed Wikipedia is.
But if you want to write limericks, man it's amazing!
GitHub Copilot is really useful. It doesn't do anything as advanced as write a whole system for you, but it's really good at filling in the blanks and typing out repetitive things while you work.
If you say that, then you are speaking from a position of ignorance, inexperience, or both.
Show us. I think people will be happy if you do.
I explained one recent usage in more detail in this post (not sure if there's a better way to link):
https://kotakuinaction2.win/p/17sPFoXxFo/x/c/4Z8kybNWU3r
Here are a series of chatGPT-4 queries I did today. I'm not an Excel wizard, so this was helpful. The answers worked perfectly with no edits. I could have figured this out, but it probably would have taken me 15-20 minutes (the third query with summary by month was tricky). Not exactly programming, but programming adjacent!
"I have an excel spreadsheet with a sheet named "Details". There are many rows. Row 1 is a header. Column C is a "Category" column. In another worksheet I want to list all the unique Categories from sheet Details AND how many rows have that category."
follow up
" In sheet Details, Column G is called "Widgets". This maybe a value or it may be blank. I want to summarize how many of each category has a value in the widgets col"
another follow up
"Another. Column B is a date/time col. summarize the number of rows for each month."
Lol.
I both agree and disagree with this statement.
No, it is not sentient, nor is it operating from anything like a position of sentience or AGI. Anyone who says this or thinks this, cough journalists cough, is not worth reading.
Detractors of the technology latch onto this as a strawman argument to attempt to devalue the entire technology.
It's a tool. It has proper and improper uses. It has things it is good at and things it is not good at. Between code templating, SQL optimization and generation, error checking, etc., my productivity is improving today.
If you refuse to even look at a tool because some mouthbreather on the Internet thinks chatGPT is going to replace novelists and become skynet, that's your error of reasoning.
It can, however, be quite clever!
Lol, one last thing. I inherited an insane Excel spreadsheet...multiple worksheets, years of data, etc.
I pasted in some of the schema, gave some workbook names, described what I wanted to do, and it came up with some spot-on xlookup()/vlookup() functions. Pivot tables make my eyes cross, and this took about 60 seconds to finish.
I think I've written enough on the topic, but I've given some explicit examples, etc., in other posts. Look at my recent posts if you want to see the rest of my thoughts.
And tools for that have already existed for many years.