I've seen a few articles saying that instead of hating AI, the real quiet programmers, young and old, are loving it and have a renewed sense of purpose coding with LLM helpers (this article was also hating on Ed Zitron, which, given its angle, makes sense).
Is this total bullshit? I have to admit, even though it makes me ill, I've used LLMs a few times to help me learn simple code syntax quickly (I'm an absolute noob who's wanted my whole life to learn to code but can't grasp it very well). But yes, a lot of the time it's wrong.
- From my experience it's great at doing things that have been done 1000x before (which makes sense given the training data), but when it comes to building something novel it really struggles, especially if there are third-party libraries involved that aren't commonly used. So you end up spending a lot of time and money hand-holding it through things that would likely have been quicker to do yourself.
- The "done 1000x before" bit has quite a few side effects as well.
- Lesser-used languages suffer because there's not enough training data. This gets annoying quickly when it overrides your static tools and suggests nonsense.
- Larger training sets contain more vulnerabilities, as most code is pretty terrible and may just be snippets that someone used once and threw away. OWASP has a top 10 for a reason. Take input validation, for example: if I'm working on parsing a string, there's usually context such as whether this is trusted data or untrusted (rough sketch after this list). If I don't have that mental model where I'm thinking about the data, I might see generated code and think it looks correct, but in reality it's extremely nefarious.
- It's also trained on old stuff. And because it's old, you get some very strange side effects and less maintainability.
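A rough Python sketch of the trusted-vs-untrusted distinction I mean (the port example and the names are made up; it's just to show the shape of it):

```python
import re

# Hypothetical example: parsing a user-supplied "port" value.
def parse_port_naive(value: str) -> int:
    # Looks fine in generated code, and is fine for trusted config files,
    # but int() accepts surrounding whitespace, a leading "+", even
    # non-ASCII digits, and nothing here checks the range.
    return int(value)

def parse_port_untrusted(value: str) -> int:
    # For untrusted input: check the shape, then the range, then convert.
    if not re.fullmatch(r"[0-9]{1,5}", value.strip()):
        raise ValueError(f"not a port number: {value!r}")
    port = int(value)
    if not (1 <= port <= 65535):
        raise ValueError(f"port out of range: {port}")
    return port
```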
- It's decent at reviewing its own code, especially if you give it different lenses to look through: "Analyze this code and look for security vulnerabilities." "Analyze this code and look for ways to reduce complexity." And then… think about the response like it's a random dude online reviewing your code. A lot of the time it raises good issues, but sometimes it tries too hard to find little shit that is at best a sidegrade.
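A minimal sketch of that "different lenses" idea, assuming the official OpenAI Python SDK with an API key in the environment; the model name and the exact prompts are just placeholders:

```python
from openai import OpenAI  # assumes the official OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LENSES = [
    "Analyze this code and look for security vulnerabilities.",
    "Analyze this code and look for ways to reduce complexity.",
]

def review(code: str, model: str = "gpt-4o") -> list[str]:
    """Run the same snippet through each review lens as a separate request."""
    reviews = []
    for lens in LENSES:
        resp = client.chat.completions.create(
            model=model,  # placeholder; use whatever model you have access to
            messages=[
                {"role": "system", "content": lens},
                {"role": "user", "content": code},
            ],
        )
        reviews.append(resp.choices[0].message.content)
    return reviews
```

Then read each response the way you'd read a drive-by review from a random dude online: some of it is good, some of it is a sidegrade.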
 
- this 
 
- I'm pretty sure every time you use AI for programming your brain atrophies a little, even if you're just looking something up. There's value in the struggle.
- So they can definitely speed you up, but be careful how you use them. There's no value in a programmer who can only blindly recite LLM output. There's a balance to be struck in there somewhere, and I'm still figuring it out.
- I assume you were joking, but some studies have come out recently that found this is exactly what happens, and for more than just programming. (Sorry, it was a while ago so I don't have links.)
- Doesn't sound like they're joking to me.
- There are similar studies on the effects of watching a YouTube video instead of reading a manual.
 
- This is literally the same argument that was made against books, and against the development of writing itself.
 
- You can either spend your time generating prompts, tweaking them until you get what you want, and then using more prompts to refine the code until you end up with something that does what you want… or you can just fucking write it yourself. And there's the bonus of understanding how it works. AI is probably fine for generating boilerplate code or repetitive simple stuff, but personally I wouldn't trust it any further than that.
- There is a middle ground. I have one prompt I use. I might tweak it a little for different technologies, languages, etc., only so I can fit more standards, documentation, and example code in the upload limit. And I ask it questions rather than asking it to write code: I have it review my code, suggest other ways of doing something, explain best practices, evaluate maintainability and conformance to corporate standards, etc. Sometimes it takes me down a rabbit hole when I'm outside my experience (so do Google and Stack Overflow, for what it's worth), but if you're executing a task you understand well on your own, it can help you do it faster and/or better.
 
- In the grand scheme of things, I think AI code generators make people less efficient. Some studies have come out that indicate this. I've tried to use various AI tools, as I do like the field of AI/ML in general, but they would end up hampering my work in various ways.
- I'm enjoying it, mostly. It's definitely great at some tasks and terrible at others. You get a feel for what those are after a while:
- Throwaway projects (proofs of concept, one-off static websites, that kind of thing): absolutely ideal. Weeks of dev becomes hours, and you barely need to bother reviewing it if it works.
- Research ("find a tool for doing XYZ") where you barely know the right search terms: ideal. The research mode on claude.ai is especially amazing at this.
- Anything where the language is unfamiliar: AI bootstraps you past most of the learning curve. It doesn't help you learn much, but sometimes you don't care about learning the codebase layout and you just need to fix something.
- Any medium-sized project with a detailed up-front description.
- What it's not good for:
- Debugging in a complex system.
- Tiny projects (a one-line change): faster to do it yourself.
- Large projects (a 500+ line change): the diff becomes unreviewable fairly quickly and can't be trusted (much worse than the same problem with a human, where you can at least trust the intent).

- I’m not against AI use in software development… But you need to understand what the tools you use actually do. - An LLM is not a dev. It doesn’t have the capability to think on a problem and come up with a solution. If you use an LLM as a dev, you are an idiot pressing buttons on a black box you understand nothing about. - An LLM is a predictive tool. So use it as a predictive tool. - Boilerplate code? It can do that, yeah. I don’t like to use it that way, but it can do that.
- Implementing a new feature? Maybe, if you’re lucky, it has been trained on enough data that it can put something together. But you need to consider its output completely untrustworthy, and therefore it will require so much reviewing that it’s just better to write it yourself in the first place.
- Implementing something that solves a problem not solved before? Just don’t. Use your own brain, for fuck’s sake. That’s what you have been trained on.
- The one use of AI, at the moment, that I actually like and that actually improves my workflow is JetBrains' full-line completion AI. It very often accurately predicts what I want to write when it's boilerplate-ish, and shuts up when I write something original.
- Yes, they do have the ability to think and reason just like you (generally much faster and slightly better): https://medium.com/@leucopsis/how-gpt-5-compares-to-claude-opus-4-1-fd10af78ef90 (96% on the AIME with zero tools, only reading the question and reasoning through the answer).
- This is not true. They do not think or reason. They have code that appears to reason, but it definitely is not reasoning. Once it gets off track it doesn't consider that it is obviously wrong. A simple math problem can fail in a way that is really obvious to a human, for example.
- Absolutely not. This comment shows you have absolutely zero idea how an LLM works. 
- No, they can't think and reason. However, they can replicate and integrate the thinking and reasoning of many people who have written about similar problems. And yes, they can do it much faster than we could read a hundred search result pages. And yes, their output looks slightly better than many of us in many cases, because they are often dispensing best practices by duplicating the writings of experts. (In the best cases, that is.)
 
 
- The hate is ridiculous, as is the hype. It's a new tool that is often useful when used correctly. Don't use it to write entire applications; that's a recipe for disaster.
- But if you're learning a new language it's amazing. You have an infinitely patient and immediately available tutor that can teach you a language's syntax, best practices, etc. I don't know why that would make you "ill", besides all the shame "real developers" seem to want to lump on anybody who uses AI. If you're not concerned about passing some "I don't use an IDE" nerd's purity test, you'll be fine.
- It's an absolute game changer, IMO. The research phase of any task is reduced to effectively nothing, and I get massive amounts of work done when I walk away from my desk, because I plan for and keep lists of longer tasks to accomplish during those times.
- You need to review every line of code it writes, but that's no different than it ever was when working with junior devs 🤷♂️ except now I get the code in minutes instead of weeks, and the agents actually react to my comments.
- We're using this with a massive monorepo containing hundreds of thousands of lines of code, and in tiny tool repos that serve exactly one purpose. If our code quality checks and standards weren't as strict as they have been for the past decade, I think it wouldn't work well with the monorepo.
- The important part is that my company is paying for it; I have no clue what these tools cost. I am definitely more productive, there is absolutely no debate there IMO. Is the extra productivity worth the extra cost? I have literally no idea.
- From my experience it's really great at bootstrapping new projects for you. It's good at getting you sample files and, if you're using Cursor, just building out a sample project.
- It's decent as an alternative to Google/SO for syntax or previously encountered errors. There are a few things it hallucinates, but generally it can save time as long as you don't trust it blindly.
- It struggles when you give it complex tasks or not-straightforward items, or things that require a lot of domain knowledge. I once wanted to see which CSS classes were still in use across a handful of React components and it just shat the bed (rough sketch below).
- The people who champion AI as a human replacement will build a quick proof of concept with it and proclaim "oh shit, this is awesome!" and not realize that that's the easy part of software engineering.
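The CSS-classes question is a good example of something a dumb deterministic script handles better anyway. A rough Python sketch (the paths are hypothetical, and a regex like this misses dynamically built class names, so treat the output as a hint):

```python
import re
from pathlib import Path

# Hypothetical paths; adjust to the actual project layout.
CSS_FILE = Path("src/styles/app.css")
COMPONENT_DIR = Path("src/components")

# Class selectors defined in the stylesheet (.foo, .bar-baz, ...).
defined = set(re.findall(r"\.([A-Za-z_][\w-]*)", CSS_FILE.read_text()))

# Classes referenced in className="..." attributes across the components.
# This won't catch clsx(), template strings, or computed class names.
used: set[str] = set()
for tsx in COMPONENT_DIR.rglob("*.tsx"):
    for attr in re.findall(r'className="([^"]*)"', tsx.read_text()):
        used.update(attr.split())

print("possibly unused:", sorted(defined - used))
print("used but not defined here:", sorted(used - defined))
```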
- I use it mainly to tweak things I can't be bothered to dig into, like Jekyll or WordPress templates. A few times I let it run and do a major refactor of some async back-end code. It botched the whole thing. Fortunately, it was easy to rewind everything from the remote git repo.
- Last week I started a brand new project and thought I'd have it write the boilerplate starter code. I described in detail what I was looking for. It sat there for ten minutes saying "Thinking" and nothing happened. I killed it and created it myself. This was with Cursor using Claude. I've noticed it's gotten worse lately, maybe because of the increased costs.
- Yes. But I'm not paying for premium like some of my coworkers. I use it to avoid the grunt work, and to avoid things I know I'd have to google.
- I used a coworker's account for a while and the autocomplete is amazing. If it guesses wrong you just keep typing as usual. If it's right, hit tab and it saves you like 20 seconds.
- On the other hand, I have coworkers that do not check the ChatGPT output and the PRs make no sense. Example: instead of making a variable type any (which is forbidden in our codebase) they did
let a: Date | number | string | object | (…) = fetchData()
- I've had good luck having it write simple scripts that I could easily handle myself. For example, I needed a script to chop a directory full of log files up into archives, with some constraints (rough sketch below). That sort of thing.
 I haven’t tried it on anything more substantial.
 This was using Copilot because I haven’t found a good coding model that will run locally on 16GB VRAM.
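To give a flavour of what I mean, a rough sketch along those lines (the directory and the "older than 7 days" constraint are made up; my actual constraints were different):

```python
import gzip
import shutil
from datetime import datetime, timedelta
from pathlib import Path

# Sketch: gzip any .log file older than 7 days into an archive/ subfolder,
# leaving recent logs alone. Paths and the cutoff are hypothetical.
LOG_DIR = Path("/var/log/myapp")
ARCHIVE_DIR = LOG_DIR / "archive"
CUTOFF = datetime.now() - timedelta(days=7)

ARCHIVE_DIR.mkdir(exist_ok=True)

for log in LOG_DIR.glob("*.log"):
    if datetime.fromtimestamp(log.stat().st_mtime) < CUTOFF:
        target = ARCHIVE_DIR / (log.name + ".gz")
        with log.open("rb") as src, gzip.open(target, "wb") as dst:
            shutil.copyfileobj(src, dst)
        log.unlink()  # remove the original once the archive is written
```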
- I use it now and again, but not integrated into an IDE and not to write large bits of code. My uses are like so:
- Rewrite this rant to shut the PO/PM up. Explain why this is a waste of time.
- Convert this Excel row into a custom model.
- Given these tables, give me the SQL to do XYZ.
- Sometimes for troubleshooting an environment issue.
- Do I need it? No. But if it saves me some time on bullshit tasks, then that's more time for me.
- My brother in programming, please don't use AI for data format conversion when deterministic, energy-efficient means are readily available. Signed, an old man.
- I'd never trust it to make the change itself, but I recently asked for a Python script to do a change I needed and it did it perfectly on the first try (I verified the diff). Also, I don't know Python at all. Signed, a fellow old man.
- It was just an example to illustrate the point. I use specific converters for actual format conversions; actual uses have been mapping it to a custom data model. You are right though, right tool for the job and all that.
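For what it's worth, the deterministic version of "map a row to a custom model" is usually only a few lines anyway. A rough Python sketch, assuming openpyxl is installed and using a made-up target model:

```python
from dataclasses import dataclass

from openpyxl import load_workbook  # assumes openpyxl is installed

# Made-up target model; the real "custom model" would be whatever the
# codebase already defines.
@dataclass
class OrderRow:
    order_id: str
    quantity: int
    unit_price: float

def load_orders(path: str) -> list[OrderRow]:
    # Read the first worksheet, skip the header row, map each row to the model.
    ws = load_workbook(path, read_only=True).active
    rows = ws.iter_rows(min_row=2, values_only=True)
    return [OrderRow(str(r[0]), int(r[1]), float(r[2])) for r in rows]
```

Boring, deterministic, and it fails loudly on bad data instead of quietly making something up.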
- 🙄 
 
- Sorry, I agree with the other replier, but why would you use AI to convert to and from XML… when more objective, reliable, and deterministic tools for conversion have existed for a long time? You know well how often LLMs make stuff up…
- Clarified my point in the reply above.
 
 
- I use LLMs from both ends. They help me plan and think through complex code architecture, and they help me do the little stuff I do too infrequently to remember. Putting it all together is usually all me.
- I used Supermaven (a Copilot competitor) for a while and it was sorta OK sometimes, but I turned it off when I realized I'd forgotten how to write a switch case. Autocomplete doesn't know your intent, so it introduces a lot of noise that I prefer to do without.
- I've been trying out Claude Code for a couple of months and I think I like it OK for some tasks. If you use it to do your typing rather than your thinking, it's pretty decent. Give it small tasks with detailed instructions and you generally get good results. The problem is that it's most tempting to use when you don't have the problem figured out and you're hoping it will, but that's when it gives you over-convoluted garbage. About half the time this garbage is more useful than starting from scratch.
- It's good at sorting out boilerplate and following explicit patterns that you've already created. It's not good at inventing and implementing those patterns in the first place.