With all the recent hype around AI, I feel that a lot of people don’t understand how it works or where it is useful. AI is useful for solving certain types of problems that are really difficult to solve with traditional programming, like finding patterns that aren’t obvious to us.
For example, object recognition is about finding patterns in images. Our brains are great at this, but writing a computer program capable of taking pixels and figuring out if the pattern is there is very hard.
Even if AI sometimes misclassifies objects, it can still be useful. For example, in a factory you can use AI to find defects on the production line. Even if you don’t get it perfect, going from 100 defects per million products to 10 per million is a huge difference and saves the factory a lot of money.
The most useful application so far seems to have been predicting protein folding. I have to read up on that; it should allow us to cure all sorts of bad things.
So are we not calculating the amount of training the junior dev took?
A junior dev just needs to copy-paste code from Stack Overflow
They were trained before the task. The LLM was trained after.
The LLM costs $20 a month and needed only 60 hours of training; the junior dev has been at it for years, costs that much for half an hour, and still needed me to repeatedly explain what a rectangle is
If you’re paying someone $40 an hour who doesn’t know what a rectangle is then I think you’re the problem.
The problem is that he’s paying $40 an hour, and for that you only get someone who knows what a ਆਇਤਕਾਰ (Punjabi for “rectangle”) is.
I’ve just worked for agencies that hire juniors and outsource. If you’d seen what I’ve seen, you’d change your tune.
One key point here: while you actually can replace a bunch of junior developers with AI in some places, any replaced junior developer will never become a senior developer, the kind who can’t be replaced by AI because he/she is basically experience on two legs.
So, corporations, don’t complain about the lack of experienced senior personnel, because YOU have been the main reason they don’t exist.
You could say the same for a finite element model. A junior engineer with just four years of training can explicitly solve the deflection at the center of a slender, simply supported beam of prismatic section and produce an exact (if slightly incorrect) answer from the closed-form formula (sketched below). Building a FEM of the same beam takes longer (to make the model) and solves the problem with similar accuracy; both are good enough for design work.
Only a fool wouldn’t have a FEM around, though, as it can solve problems that would take a human centuries. They may as well make a cartoon with a child digging a 3” hole in beach sand and then a backhoe making a jagged-edged hole of the same size.
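For anyone curious what that closed-form answer looks like, here’s a minimal sketch in Python of the textbook midspan deflection of a simply supported prismatic beam under a uniform load, delta = 5wL^4 / (384EI). All the numbers are made-up illustrative inputs, not anything from the comment above.

    # Midspan deflection of a simply supported prismatic beam under a
    # uniform load: delta = 5 * w * L**4 / (384 * E * I)
    # All inputs below are illustrative, not from this thread.
    def midspan_deflection(w, L, E, I):
        # w: load per unit length (N/m), L: span (m),
        # E: Young's modulus (Pa), I: second moment of area (m^4)
        return 5 * w * L**4 / (384 * E * I)

    # Example: a 6 m steel beam (E = 200 GPa) with I = 8.0e-5 m^4
    # carrying 10 kN/m
    print(midspan_deflection(w=10_000, L=6.0, E=200e9, I=8.0e-5))
    # -> ~0.0105 m, i.e. about 10.5 mm of sag at midspan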
This looks more like a floating point issue than a mistake an LLM would make
There are no LLMs involved in this picture; to train an LLM you’d need 100x the training data. The panel is about a normal ML model.
But a floating-point issue is the exact type of issue an LLM would make (it does not understand what a floating-point number is or why you should treat floats differently). To be fair, a junior developer would make the same type of mistake.
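To make that concrete, here’s a minimal sketch (my example, not the comic’s) of the classic floating-point bug: exact equality on computed floats fails, and a tolerance-based comparison is the usual fix.

    import math

    # Binary floats can't represent most decimal fractions exactly,
    # so computed values drift by tiny amounts.
    area = 0.1 * 0.2
    print(area)                                    # 0.020000000000000004

    print(area == 0.02)                            # False: naive equality fails
    print(math.isclose(area, 0.02, rel_tol=1e-9))  # True: compare with tolerance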
A junior developer is, hopefully, being mentored by more senior coworkers who are extra careful with code reviews and would spot the bug for the dev. Machine generated code needs an even higher level of scrutiny.
It is relatively easy to teach a junior developer to write code that is easy to read and conforms to the team’s style guide.
Decaf?? Wtf. Gross.
I feel this in AutoCAD (lack of) precision.
Decaf is heresy