Well, I hope you don’t have any important, sensitive personal information in the cloud.
The article doesn’t say which models were tested or which prompts were used. They really need to include that if they have anything worth saying; otherwise it’s just a marketing article for their platform.
“We asked 100+ AI models to write code. The Results: AI-generated Code…”
no shit, son
“…That Works…”
OK, this part is surprising, probably headline-worthy
“…But Isn’t Safe”
Surprising literally no one with any sense.
These weren’t obscure, edge-case vulnerabilities, either. In fact, one of the most frequent issues was Cross-Site Scripting (CWE-80): AI tools failed to defend against it in 86% of relevant code samples.
So, I will readily believe that LLM-generated code has additional security issues. But given that the models are trained on human-written code, this raises the obvious question: what percentage of human-written code properly defends against cross-site scripting attacks? The article doesn’t address that.
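For anyone unfamiliar with CWE-80, the failure mode is mundane: user input interpolated straight into HTML. A minimal Python sketch (function names are illustrative, not from the report):

```python
import html

def render_comment_unsafe(comment: str) -> str:
    # What the flagged code samples typically do: interpolate user input
    # directly into markup, so any <script> tag in it executes in the browser.
    return f"<p>{comment}</p>"

def render_comment_safe(comment: str) -> str:
    # The CWE-80 mitigation: escape HTML metacharacters before interpolation,
    # turning the payload into inert text.
    return f"<p>{html.escape(comment)}</p>"

payload = "<script>alert('xss')</script>"
print(render_comment_unsafe(payload))  # script tag survives intact
print(render_comment_safe(payload))    # &lt;script&gt;… rendered as text
```

The fix is a one-liner, which is what makes an 86% miss rate notable: the models aren’t failing at something hard, they’re reproducing the unescaped pattern that dominates their training data.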
There are a few things LLMs are simply not capable of, and one of them is understanding and observing implicit invariants.
(That’s going to be funny once the tech has been used for a while on larger, complex, multi-threaded C++ code bases. Given that C++ already appears to be less popular with experienced developers than with juniors, I’m doubtful C++ will survive that clash.)
Ssssst 😅
Here’s the full report, for anyone who doesn’t want to give their personal information: https://enby.life/files/c564f5f8-ce51-432d-a20e-583fa7c100b8
This thread forgetting that junior devs exist and the purpose of code review 🤣
I love people who are excited to slave away for an LLM, doing the most mundane, monotonous shit while the machine does all the satisfying problem solving.
Techbros are really making their future beds right now, I love it.
A shallow take but not entirely incorrect.