Valid answers are “never”, “soon” or “already have - decades ago”, depending on where you draw the boundaries on definitions of AI and programmer.
Because AI means something completely different every decade, and the modern programmer works at a different level every decade.
Soon: While a decent programmer can code at a level beyond the capabilities of LLM-based generators (which I assume is what “AI” means in this context), some companies employ literal hordes of programmers who fail the simplest programming tasks, like the fizzbuzz test (sketched below). Their output is haphazard bits of code slowly cut and pasted from StackOverflow, internet forums, and code found lying around the company, assembled into something that, after several bounces off Quality Assurance and adjustments from senior developer review, passes. It’s not a stretch to see that it’s mostly a matter of turning current AI tech into streamlined products to take over these parts of the job. Are these the average programmers? There are many of them, so it could be!
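For anyone who hasn’t seen it, the fizzbuzz test asks for something like the following, and still manages to filter people out:

```python
# The classic fizzbuzz screening exercise: print the numbers 1 to 100,
# replacing multiples of 3 with "Fizz", multiples of 5 with "Buzz",
# and multiples of both with "FizzBuzz".
for n in range(1, 101):
    if n % 15 == 0:
        print("FizzBuzz")
    elif n % 3 == 0:
        print("Fizz")
    elif n % 5 == 0:
        print("Buzz")
    else:
        print(n)
```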
Already have - decades ago: An average programmer in the early 1950s spent a lot of time taking specific tasks like “this module needs this specific pattern of input and must produce this specific pattern of output” and painstakingly turning them into machine code. They would pen-and-paper out logic like `LDA #$10; JSR $FFD2; RTS`, turn it into `A9 10 20 D2 FF 60` by referencing the manual, and store that byte sequence on a punch card or directly in machine memory at address `C000`. Programs were much larger than this, of course, and the skill lay in doing it correctly and in optimizing the program so it would run well. Assemblers took over parts of this work, compilers took over more of it, then optimizing compilers and high-level languages took the work away completely. These tools fit the 1960s-era definition of AI. Now all people had to do was write a prompt to the AI like `void main() { while(true) println("yes"); }` and it would do the programmer’s job for them.
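To show how mechanical that translation step was, here’s a minimal Python sketch of my own (handling only the three instructions in the example above, nothing more) of the opcode-table lookup that assemblers automated:

```python
# Toy illustration of hand assembly: look each mnemonic up in the CPU manual's
# opcode table and emit the bytes. Handles only the three 6502 forms above.
OPCODES = {"LDA #": 0xA9, "JSR": 0x20, "RTS": 0x60}

def assemble(lines):
    out = bytearray()
    for line in lines:
        mnemonic, _, operand = line.partition("$")
        mnemonic = mnemonic.strip()
        if mnemonic == "LDA #":            # immediate load: opcode + one operand byte
            out += bytes([OPCODES["LDA #"], int(operand, 16)])
        elif mnemonic == "JSR":            # absolute jump: opcode + address, low byte first
            addr = int(operand, 16)
            out += bytes([OPCODES["JSR"], addr & 0xFF, addr >> 8])
        elif mnemonic == "RTS":            # one-byte return from subroutine
            out += bytes([OPCODES["RTS"]])
    return " ".join(f"{b:02X}" for b in out)

print(assemble(["LDA #$10", "JSR $FFD2", "RTS"]))  # prints: A9 10 20 D2 FF 60
```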
Never: Programmers will, as they always do, use this tool for everything it’s worth and work at a higher level to make bigger things faster. AI can write reams of mostly-working code? Then programmers will generate entire sections of mostly-working code and spend their time filtering, adjusting, completing, and shaping it.
My take is that it’s somewhat of a gimmick and will continue to be for a very long time.
It can write functions that do the thing you want a lot of the time, yes. But an entire codebase needs to be written in a maintainable way. For example, it’s super important to name variables things the developers understand, and that requires a very solid understanding of the specific domain or business the dev is working in, as well as of how the dev’s mind works. AIs don’t know anything about that and aren’t psychic.
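A made-up illustration of the naming point (the domain terms below are invented, not from any real codebase): both functions compute the same thing, but only the second one tells a maintainer which business rule it encodes.

```python
# Both versions are "correct"; only the second communicates the business rule.
# Names are invented purely for illustration.
def calc(a, b):
    return a * (1 - b)

def net_premium(gross_premium: float, no_claims_discount: float) -> float:
    """Premium owed after applying the customer's no-claims discount."""
    return gross_premium * (1 - no_claims_discount)
```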
There are numerous other psychological aspects of coding like this that differentiate good developers from bad ones. Even when AIs write entire programs, the human dev is going to be the one maintaining them and getting paged at 2am when something goes wrong. I don’t think anyone will put that much trust in an AI for a very, very long time.
For me, at least, it’s easier to write the code I want from scratch than to try to understand what someone else wrote and improve it. As they say: what one dev can do in one day, two devs can do in two days.
I pick never. AI just means increased productivity, not fewer jobs.
I agree. Software remains a growing field.
Well, eventually increased productivity could mean fewer jobs. Likely, in fact. Just maybe not in our lifetime.
The average person who has programmed something, or the average professional career programmer? I doubt the AI will be great at dealing with the really weird bugs and conflicts any time soon.
Yeah, and understanding the context of a massive codebase will give it a ton of challenges
It’ll probably just rewrite the spaghetti the human did and not even bother fixing the bug.
In order for an AI to know what code to scrape from Stack Overflow, a user must be able to articulate what they want the program to do. We all know they can’t, so I doubt AI will manage it for quite some time.
In order for an AI to know what code to scrape from stack overflow
It’s a common assumption that GPT is only cutting and pasting what it found on Google, but that’s not true. I spent hours trying to find help with VBA for Excel (because I know neither) with no results other than function definitions. GPT gave me working code that wasn’t anywhere on the internet. It had to have pieced the code together from the well-documented function definitions.
I see it like DALL-E and those other AI art programs. You can see the style they’re copying to create the picture, but the AI-generated pictures are not cut and pasted from images already on the internet.
You do have to be careful, though. Sometimes it gives you functions that don’t exist.
PowerShell has a well-established naming scheme of Get-[function] or Set-[function], so when you ask GPT to write PowerShell code to set the name of a file, it will use Set-FileName, but that cmdlet doesn’t exist (the real one is Rename-Item).
I do believe you can use an LLM to assist in program creation, but I doubt an end user can articulate in full what they want a program to do.
Individual scripts/modules and even simple microservices: not long… provided the AI isn’t actively poisoning itself right now.
Writing, securing, and maintaining complex applications: We’d need another breakthrough.
Since my role is often solutions architecture, I’ve been worried that cloud systems engineering is immediately vulnerable. But after working with AWS’s Q for a couple of hours, I’m less worried. Now, if someone made an AI to create a cloud provider that is well (and accurately) documented, consistent in functionality and UX, and which actually has all the features that get announced in its own blog posts, then AI might be able to run it.
If my company were to fire me and try to replace me with an LLM, I’d simply wait a month or two and then offer to do my old job at a contract rate of at least 5x what my current salary works out to. And I’d get it.
Remember when Elon made all driving jobs obsolete with AI in 2012, 2014, 2015 and 2017?
Pepperidge Farms remembers
From some of the code I’ve had to review, we may already be there…
All that means is that somebody else has written better code than your colleagues.
I’ve had a lot more success with debugging than with writing code. I had a problem adjusting the sample rate of a certain metrics framework in a Java application, and Stack Overflow failed me, both when searching for an answer and when asking the question. However, when I asked GPT-3.5 in some desperation, I got a great answer that pinpointed the necessary adjustment.
However, asking it to write simple code snippets, e.g. for migrating to a different Elasticsearch client framework, has not gone as well. I’m often met with confidently wrong answers.
Yeah, not soon: 3-10 years, I’d guess. The latest research tracking AI progress says our best models can solve entry-level CS problems with an 85% success rate. That’s not good.
It’s obvious that humans do more than just pattern matching. I’d rate the current systems as a 25% speedup to my workflow: not bad, but only for menial tasks.
Google Gemini Powered AlphaCode 2 Technical Report
On HumanEval it achieved 74.4%, surpassing GPT-4 at 67%. It successfully solves 43% of problems from recent Codeforces rounds within 10 attempts. The evaluation accounted for the time penalty, and it still ranks in the 85th percentile or higher. AlphaCode 2 already beats 85% of people in top programming competitions (who are already better than 99% of engineers out there). So I believe AI already writes better short code than the average programmer, but I don’t think it can debug any code yet. I’d say it will need a platform to test and iteratively rewrite the code (something like the loop sketched below), and I don’t see that happening earlier than 3 years from now.
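By that I mean a loop roughly like this; generate_candidate and run_tests are hypothetical stand-ins for a code model and a sandboxed test runner, not any real API:

```python
# Rough sketch of a generate/test/rewrite loop. Both helper functions are
# hypothetical placeholders, not part of any real library.
def generate_candidate(problem: str, feedback: str) -> str:
    raise NotImplementedError("call the code model here")

def run_tests(code: str) -> tuple[bool, str]:
    raise NotImplementedError("run the candidate against the problem's tests here")

def solve(problem: str, max_attempts: int = 10) -> str | None:
    feedback = ""
    for _ in range(max_attempts):              # mirrors the 10-attempt budget above
        code = generate_candidate(problem, feedback)
        passed, feedback = run_tests(code)     # failures feed back into the next attempt
        if passed:
            return code
    return None
```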
I’ve actually used it to debug code before. Not an entire program (yet), but it’s great for snippets where you’re just missing a semicolon or bracket, or need advice on how to properly call an unfamiliar function. It also writes small things like batch files incredibly well. Just like with regular language, it’s great for a few paragraphs, then begins to drift as it struggles to keep track of longer conversations. So if you only need it for a few “paragraphs” of code, it’s great.
Probably already does.
This doesn’t say what you think it does. It simply indicates that you have no idea what you’re talking about.