For some background, I originally wanted to break into programming back in college but drifted into desktop tech support and now systems administration. SysAdmin work is draining me, though, and I want to pick programming back up and see if I can make a career out of it, but the industry seems like it could be moving toward relying on AI for coding. Everything I've heard says AI is not there yet, but if it looks like it will reach the point where it can fully automate coding, should I even bother? Am I going to be obsolete after a year? Five years?
As a professional C# developer since 2012, I'd say a programmer needs four kinds of knowledge. As someone whose organization has been using GitHub Copilot for a couple of months, I'd say AI tools can help with one, maybe two of those.
Understanding language and syntax, so you can communicate the ideas in your head to the machine accurately: AI is fairly good at this and will certainly get a lot better.
Understanding algorithms and data structures, well enough to compare and contrast them and choose the most appropriate ones for each circumstance: AI picks more or less at random unless it's a frequently solved problem (see the quick example after this list). I don't expect this to get better except for the most repetitive of coding tasks.
Understanding your execution environment and adapting your solutions to use it well: I don't see the current generation of AI tools ever approaching this. They have no context for how a piece of code is actually used when they learn from it, and one-size-fits-all is not a great approach.
Understanding your customer's needs and specific problems, and creating products, not just code: problem domains and their solutions are a business's entire reason for existence. This is all kept confidential (and outside the reach of an AI training data set) for competitive reasons. As a human employee, you get to peek behind the curtain and learn these things yourself.
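To make the algorithms-and-data-structures point concrete, here's a toy C# sketch (my own illustration, not Copilot output; the names are made up). Both containers expose the same Contains call, and an autocomplete tool will happily hand you either one without knowing whether the collection holds ten items or a million in your particular system.

```csharp
// Toy example: the "right" container depends on how the code is actually used.
// List<T>.Contains is a linear scan; HashSet<T>.Contains is an average O(1) hash lookup.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

class ContainsDemo
{
    static void Main()
    {
        const int n = 1_000_000;
        var list = Enumerable.Range(0, n).ToList();   // 0 .. 999,999
        var set = new HashSet<int>(list);

        var sw = Stopwatch.StartNew();
        bool inList = list.Contains(n - 1);           // scans the whole list for the last element
        sw.Stop();
        Console.WriteLine($"List<int>.Contains:    {inList} in {sw.Elapsed.TotalMilliseconds:F3} ms");

        sw.Restart();
        bool inSet = set.Contains(n - 1);             // single hash lookup
        sw.Stop();
        Console.WriteLine($"HashSet<int>.Contains: {inSet} in {sw.Elapsed.TotalMilliseconds:F3} ms");
    }
}
```

The point isn't that HashSet always wins; a tiny list scanned once is perfectly fine. The point is that the right choice depends on usage the tool can't see.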
I'll add that one of the biggest hurdles/flaws in the fundamental architecture of these AI systems is their inability to take what works for a specific use case and reapply it in a slightly different scenario.
This flaw surfaced during self-driving development and, as far as I understand, played a major role in why the hype around it died off so fast. Adding more to the model can't fix it.