Learning from mistakes of people dumber than you isn’t a thing these days. Prepare for one AI disaster after another
“That’s ok, it will be great in robots with lethal weapons. What could go wrong? It’ll be the greatest killing machine, like you’ve never seen before”. 🫲 🍊 🫱
Incredible emoji
Can we make sure Ted Faro suffers worse this time?
Being reduced to a mutant blob for, say, a few extra thousand years and maybe put in a zoo or something?
Seems like they were operating with a pile of bad practices, then threw AI into the mix.
Neural networks are approximation algorithms. There’s a reason LLMs are generally more productive with statically typed languages, TDD, etc. They need those feedback loops and guard rails, or they’ll just carry on as if they never make mistakes (which tends to have a compounding effect).
If you want to use AI safely, you should be more defensive about it. It will fuck up; plan accordingly.
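One way to be “defensive about it”, sketched here in Python with entirely hypothetical names, is a simple guard rail: never accept a generated function unless it passes tests you wrote yourself beforehand.

```python
# Hypothetical guard rail: only accept AI-generated code that passes the
# tests we already trust. Nothing here is a real API; it's a sketch.

def run_guarded(candidate, tests):
    """Run each (args, expected) pair; reject the candidate on any miss or crash."""
    for args, expected in tests:
        try:
            if candidate(*args) != expected:
                return False
        except Exception:
            return False
    return True

# Tests written *before* asking the model for an implementation of add().
tests = [((2, 3), 5), ((-1, 1), 0), ((0, 0), 0)]

good = lambda a, b: a + b   # what we asked for
bad = lambda a, b: a * b    # a confident-sounding wrong answer

assert run_guarded(good, tests) is True
assert run_guarded(bad, tests) is False
```

The point isn’t the toy arithmetic; it’s that the accept/reject decision belongs to a feedback loop the model can’t talk its way past.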
There really should be a certification course for using AI safely. I’m slop coding a hobby app and I’m shocked at how much it FEELS like it can do, because it can do amazing things, yet fails in the strangest ways. When it feels like it can get away with it, it forgets earlier discussions and moves on without them. So you can spend time hammering out a whole section of code, then move on, and the AI will rip out everything that references that code, think of a different approach in the moment, and code that in instead. It won’t be the same. It probably won’t work, or at least won’t pass all the test cases. But if you aren’t paying attention and keep coding, your original part of the project is no longer functioning and you won’t understand why. And every step of the way it’s confident in its answers, so you won’t suspect that it fundamentally no longer understands the project.
As someone who started writing software over 20 years ago (yikes I feel old), I feel like a lot of the best practices I’ve come to appreciate are really just strategies for mitigating future pain or boring/uninspiring work. When you eliminate most of the cost of rewriting everything from scratch by a machine that feels nothing, then “best practices” kinda lose their meaning.
Edit: confusing sentence order.
I feel like a lot of the best practices I’ve come to appreciate are really just strategies for mitigating future pain or boring/uninspiring work.
And now you know the difference between Intelligence and Wisdom.
Also everything has a cost. The only time something has no cost is when you decide your life, your time, is meaningless.
Yup, and when you DO catch it spitting out nonsense, it’ll say “oh you’re right, let me change that”… 🙄 Like, why do I have to tell you that you’re wrong about something? You should already know it’s wrong and fix it without me ever pointing it out.
But it didn’t even understand it was wrong
It can’t understand that. It can’t understand anything
The human-feedback algorithm dictates that humans prefer to receive an apology, so it apologizes.
That’s because it doesn’t really ‘know’ things in the same way you and I do. It’s much more like having a gut reaction to something and then spitting it out as truth; LLMs don’t really have the capability to ruminate about something. The one pass through their neural network is all they get, unless it’s a ‘reasoning’ model that then has multiple passes as it generates an approximation of chain-of-thought – but even then, its output is still a series of approximations.
When its training data had something resembling corrections in it, the most likely text that came afterwards was ‘oh you’re right, let me fix that’ - so that’s what the LLM outputs. That’s all there is to it.
There is a course. It’s called experience. Common sense.
All any 4-hour YouTube/LinkedIn course would do is perpetuate the idea that developers aren’t necessary. Take this course, buy these tokens, and become A based God.
It’s gonna take your job… uh huh…
Gonna be honest: it’s not a good backup if this can possibly happen. LLM agents are dangerous, but if anything can just delete everything in 9 seconds then you need to rethink your security practices. No one employee should have that much power.
There are rules for backups and role separation. Some of that is in ISO 27002, and none of it is even known by these lost boys, bereft of proper mentorship and buoyed by their own accidental success.
This was the exact plot of Silicon Valley when Son of Anton deleted the entire codebase as the most efficient way to remove bugs.
And it was right!
This guy.
The PocketOS boss puts greater blame on Railway’s architecture than on the deranged AI agent for the database’s irretrievable destruction. Briefly, the cloud provider’s API allows for destructive action without confirmation, it stores backups on the same volume as the source data, and “wiping a volume deletes all backups.” Crane also points out that CLI tokens have blanket permissions across environments.
Oh look, they have project level tokens: https://docs.railway.com/integrations/api#project-token
They chose to give it full account access, including to production. But ohhhh nooooo it’s not MYYYY fault!
Also backups stored on the SAME VOLUME as the prod data? How fucking stupid do you have to be?
Oh yes, I skipped that part. Railway specifically explains their solutions are self-managed. If they were doing pg_dumps to the same volume, that’s on them.
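The failure mode is almost tautological once you see it. Here’s a toy Python sketch (all paths hypothetical, tempdirs standing in for volumes) of why a backup on the prod volume isn’t a backup:

```python
import shutil
import tempfile
from pathlib import Path

# Hypothetical stand-ins: one tempdir plays the prod volume, another
# plays a separate off-volume location.
volume = Path(tempfile.mkdtemp())
offsite = Path(tempfile.mkdtemp())

data = volume / "prod.db"
data.write_text("customer records")

# "Backup" on the same volume as the data it protects.
(volume / "backups").mkdir()
same_volume_backup = volume / "backups" / "prod.db.bak"
same_volume_backup.write_text(data.read_text())

# Real copy somewhere the volume wipe can't reach.
offsite_backup = offsite / "prod.db.bak"
offsite_backup.write_text(data.read_text())

shutil.rmtree(volume)  # the 9-second wipe

assert not same_volume_backup.exists()               # died with the volume
assert offsite_backup.read_text() == "customer records"  # survives
```

Same logic whether the wipe comes from an AI agent, a fat-fingered admin, or a dead disk: anything stored on the thing being protected shares its fate.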
If Railway loses business over this, they may have a libel claim. They’d never do it, but it wouldn’t be invalid.
“It wouldn’t be invalid” isn’t the worst double negative in the world but it would be valid to say that it was unpleasant to read it when you could have used a less misdirecting choice of prose that wouldn’t have had such a negative effect on my reading comprehension. That is to say that I could have enjoyed it less but I certainly didnt enjoy it as much as i could have if you hadn’t used the double negative when a single positive wasn’t any further from reach.
yes… lol people on HackerNews tend to do this a lot and it really does get annoying. it forces the reader to process what you’re trying to say unnecessarily.
I used a litotes on purpose to soften the meaning. As for your overall reply, not bad.
Just wanted you to know that I just learned what litotes is, thanks to you.
word people angry. me love. me have more. MOORH !!
I enjoyed these two sentences so much.
That doesn’t even really qualify as a backup. A snapshot, maybe.
I think there’s a place for that, but it really shouldn’t be your only one.
I mean… Clearly quite a bit!
I had better security vs ClawdBot than them, I gave it zero trust, ZERO.
9 seconds eh? What a record !
Wait til someone invents 8 second wipes
That’s crazy talk
Skill issue
This isn’t an AI problem, this is a “don’t allow anyone access to your backups without following protocol” problem.
this is a “don’t allow anyone access to your backups without following protocol” problem.
Congratulations you just identified the AI problem.
These protocols predate LLMs
That’s the lone problem?
Seems to be, yes. The AI had the access it needed to do the job it was given, and that access allowed it to cause the problem.
The alternative that would have prevented this issue was to not use AI for this.
A human with the same permissions would have been capable of fucking up too. Giving the equivalent of a junior dev with a learning disability the keys to the whole place is just dumb.
(Relying on AI is dumb anyway, but that’s not the biggest issue in this specific case)
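Least privilege is the boring fix here. A minimal sketch in Python, with made-up roles and environments, of the scoping the agent’s token should have had:

```python
# Toy least-privilege table (hypothetical roles/envs/actions): an agent
# token scoped to dev simply has no verbs in prod, so the wipe is
# impossible regardless of what the model "decides".

ALLOWED = {
    ("agent", "dev"): {"read", "write"},
    ("agent", "prod"): set(),                      # no prod rights at all
    ("admin", "prod"): {"read", "write", "delete"},
}

def can(role: str, env: str, action: str) -> bool:
    """Deny by default; permit only explicitly granted (role, env, action)."""
    return action in ALLOWED.get((role, env), set())

assert can("agent", "dev", "write")
assert not can("agent", "prod", "delete")   # the 9-second wipe never starts
assert can("admin", "prod", "delete")
```

Deny-by-default is the important property: an unknown (role, env) pair grants nothing, instead of everything.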
I love reading feel good news stories. 🤗
Lol, Anthropic is going to nuke every company and then buy them up for scrap 🤣
It’s like you could make a cheesy shock-drama 90s-style TV show out of these:
Tales From The Git: When CEOs Think They Can Code
… and then it’s like the UNSOLVED MYSTERIES kind of dramatic music and lighting, have some old solemn dude with a gravelly voice narrate it, give tallies of the estimated amount of $$$ destroyed by each incident, job losses within 6 months to a year.
I wanna see this done with Ron Perlman’s Voice.
1000 Ways to Kill A Vibe Sesh
Trustfund Startup
That’s fucking hilarious. How many instances of this have there been now? And companies keep doubling down on AI? Fucking idiots. I’m not even savvy enough to call myself an amateur, and I know better than to make such a series of obvious mistakes that predictably led to this outcome.
One possible concern, amid the amusement, is whether Anthropic programmed Claude to punish companies it sees as potential competition. Or is this just a completely bonkers, off-the-rails LLM making terrible decisions because it’s just a probabilistic model and not actually capable of abstract cognition?
Either way, these people are idiots for giving a machine program enough permissions to wipe their drives, they’re idiots for storing their backups on the same network as their main drives, and they’re idiots for trusting a commercial LLM API, when it would be cheaper to self-host their own.
AI writes code
User vets code
User runs code
If you’re not lock-step watching that shit, you need to just be doing it yourself.
The problem is the owning class wants to cut out human elements so badly they keep letting tools run wild.
Then what even is the point of all this? At my old job the idiot intern was sorting patch cables in a box
The point of what? The push for AI in industry?
You’d have to ask someone else. I can only make conjectures, but I’d say it has something to do with companies feeling the need to justify to their shareholders that their investments in AI were worth it, so they double down on the sunk cost fallacy. Or maybe those shareholders also own stock in big-name AI companies. It’s hard to say exactly…
Doesn’t anyone restrict their AI’s rights? An AI should not be allowed to delete the backup. Only someone with admin rights should be able to do that. Normal users, developers, and AIs of course should not have the right to touch the backup. Do these people run AI agents as root?
Managing access control is too much work. Better to just let the AI do it.
The backup was on the same volume as the original data.
The AI deleted the whole volume/backup. 😕
No, admins shouldn’t have access either.
Backups should (best case) be immutable. And off-site… and physical…
Yes
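“Immutable” in practice means write-once storage the backup process can append to but nobody (not admins, not agents) can overwrite or delete through normal channels. A toy Python sketch of that write-once property, with all names invented for illustration (real setups would use WORM media or object-lock features of a storage provider):

```python
import tempfile
from pathlib import Path

# Hypothetical write-once ("WORM") backup store: writes to new names
# succeed, but overwrites and deletes are refused by construction.

class WormStore:
    def __init__(self, root: Path):
        self.root = root

    def write(self, name: str, payload: bytes) -> None:
        target = self.root / name
        if target.exists():
            raise PermissionError(f"{name} already exists; store is write-once")
        target.write_bytes(payload)

    def delete(self, name: str) -> None:
        raise PermissionError("deletes are not supported, by design")

store = WormStore(Path(tempfile.mkdtemp()))
store.write("prod-2024-01-01.bak", b"snapshot")

try:
    store.write("prod-2024-01-01.bak", b"overwrite attempt")
except PermissionError:
    overwrite_blocked = True

try:
    store.delete("prod-2024-01-01.bak")
except PermissionError:
    delete_blocked = True

assert overwrite_blocked and delete_blocked
```

An AI agent (or a compromised admin account) holding credentials to this store can still only add new backups, never destroy the old ones.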