There's no way the deep state would NOT slip their tentacles into this technology. Use it to help you get some code down or whatever but don't trust it with anything damaging obviously
This technology isn't entirely some organic development - they've pumped trillions of dark money into the tech sector for decades. OpenAI is just the latest iteration, probably born out of technology developed to sort through all the masses of data they collect every day. In the '90s it was Microsoft - now an operating system they control dominates desktop PCs. They did the same with Apple and Google in the late '00s to make sure they controlled all the smartphones. Now they're doing the same with these AI tools they want to sell to everyone and get everyone dependent on.
It's pretty shit at coding from all accounts I've heard and from all the code I've had to correct that came from it.
I have the paid version; you'd be surprised at just how much better at coding GPT-4o is than the free version, and what it can do. It's entirely possible to paste in a 10k-line source file, say "write me a function that does X, Y, and Z, using only subroutines, data types, and object classes in the above source file", and get the right answer on the first try. It can also reliably port code from one language to another, clean up code by moving repeated actions into their own macros or functions, or (probably most helpful) write unit test suites for a codebase. I use it very frequently for all of the above.
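To give a flavor of the unit-test use case: here's a sketch of the kind of suite it'll reliably generate when you point it at a small helper. The slugify() function is made up for illustration, not from any real codebase:

```python
import re

# Hypothetical helper of the sort you might ask it to write tests for --
# slugify() is invented for this example:
def slugify(title):
    """Lowercase, then collapse runs of non-alphanumerics into single hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# The kind of test suite it will produce on request (pytest-style,
# discoverable by any runner that picks up test_* functions):
def test_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_collapses_runs():
    assert slugify("a -- b") == "a-b"

def test_strips_edge_punctuation():
    assert slugify("  (trimmed)  ") == "trimmed"
```

The value isn't that any one test is clever; it's that it'll happily churn out dozens of these edge-case checks you'd never bother writing by hand.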
What it can't do - yet - is creativity. When I go in knowing exactly what functions I want it to write, I can get it to do all the 'boilerplate' stuff and get much more code written per day than I could otherwise. But if I were to try e.g. a Project Euler problem that I don't know how to solve, and just say "write a script that solves this problem", it'll come up with the obvious brute-force solution (that will take a hundred years to run and requires an exabyte of RAM), but not come up with any of the clever optimizations and shortcuts and mathematical equivalencies needed to actually solve the problem.
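To make the brute-force-versus-shortcut gap concrete with a toy case (Project Euler problem 1, "sum of all multiples of 3 or 5 below 1000" - far easier than the problems where this actually bites, but the same shape):

```python
# The "obvious" brute-force answer a model tends to reach for:
def brute_force(n):
    return sum(k for k in range(n) if k % 3 == 0 or k % 5 == 0)

# The mathematical shortcut: the sum of multiples of d below n is
# d * t * (t + 1) / 2 where t = (n - 1) // d, combined by
# inclusion-exclusion over 3, 5, and their overlap at 15.
# Runs in O(1) instead of O(n).
def closed_form(n):
    def tri(d):
        t = (n - 1) // d
        return d * t * (t + 1) // 2
    return tri(3) + tri(5) - tri(15)

print(brute_force(1000), closed_form(1000))  # both 233168
```

At n = 1000 either one is instant, which is exactly the point: on the easy problems the brute-force answer looks fine, and you only notice the missing insight when n gets big enough that the naive version never finishes.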
If you don't know what good code looks like and you can't stay strict in telling it your requirements, it'll give you dangerously misleading almost-correct code that's worse than not having anything at all. You shouldn't use it for anything where you can't easily check if it's right.
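One cheap way to keep the "easily check if it's right" bar honest is to diff the generated code against a dumb, obviously-correct reference on a pile of random inputs. A sketch - both functions here are invented for illustration:

```python
import random

# Stand-in for a helper the model wrote for you (hypothetical):
def generated_unique_sorted(xs):
    return sorted(set(xs))

# A slow, boring reference you trust because there's nothing clever in it:
def reference_unique_sorted(xs):
    out = []
    for x in sorted(xs):
        if not out or out[-1] != x:
            out.append(x)
    return out

# Hammer both with random inputs; any subtle divergence in the generated
# version (edge cases, duplicates, empty input) shows up as a failed assert.
random.seed(0)
for _ in range(1000):
    xs = [random.randint(-5, 5) for _ in range(random.randint(0, 20))]
    assert generated_unique_sorted(xs) == reference_unique_sorted(xs)
```

It's not a proof of correctness, but it catches the "almost-correct" failure mode cheaply - and if you can't even write the dumb reference, that's a sign you're in the territory where you shouldn't be trusting the output at all.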
But if you do know what good code looks like and what you want, it's a genuine game-changer: on the projects where I use it, I'm at least twice as productive as on the ones where I don't.
Obviously, I assume that everything typed in there is stored forever, and I don't use it for anything private. But if I'm writing code for open-source projects, OpenAI is going to scrape that code sooner or later anyway, so there's not much difference.