- cross-posted to:
- technology@lemmy.world
cross-posted from: https://lemmy.zip/post/49954591
“No Duh,” say senior developers everywhere.
The article explains that vibe code is often close to functional, but not quite, requiring developers to go in and find where the problems are, resulting in a net slowdown of development rather than productivity gains.
Then there’s the issue of finding an agreed-upon way of tracking productivity gains, a glaring omission given the billions of dollars being invested in AI.
According to Bain & Company, companies will need to fully commit to realize the gains they've been promised.
“Fully commit” to see the light? That… sounds more like a kind of religion, not like critical or even rational thinking.
Maybe I should try it to understand, but to me this feels like it would produce code that doesn't follow company standards, code that's harder to debug since the dev has little to no idea how it works, and code that is overall of lower quality than what a dev who doesn't use AI would produce.
And I would not trust those unit tests: how can you be sure they test the correct thing if you never made them fail in the first place? A unit test that passes right away is not a test you should rely on.
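That point can be sketched with a minimal Python example (the function and tests here are hypothetical illustrations, not from the article): a test that never had to fail can pass vacuously.

```python
# A simple function under test: drop non-positive numbers.
def keep_positive(nums):
    return [n for n in nums if n > 0]

# Weak test: it would also pass against a broken "return nums" implementation,
# because the input contains nothing to filter out. It passes right away,
# but it never forced a failure, so it proves very little.
def test_weak():
    assert keep_positive([1, 2, 3]) == [1, 2, 3]

# Stronger test: written so that the trivial broken implementation fails it,
# i.e. it was seen to fail at least once before the real code made it pass.
def test_strong():
    assert keep_positive([-1, 2, -3]) == [2]
```

This is why watching a generated test fail first (or mutating the code and checking that the test catches it) matters before trusting a green run.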
Don’t take it the wrong way, but if Claude writes all of your code, doesn’t that make you more of a product owner than a dev?
Sorry, my comment misled you: it’s not such a hands-off experience that you go from dev to PM. It’s more like a smart code monkey that helps you. I absolutely have to review almost all of the code, but I’m spared the typing.
Thank you for clarifying. It does align more with the way I would use an LLM in my day-to-day work then, which is quite reassuring.
Even if it doesn’t work for me, I can still see the advantage of using an AI assistant in those contexts. In the end, as long as you are doing the work required, the tools you use don’t really matter!