(The meme’s author may be convinced but I am still not, to be clear)
From: https://terra.incognita.net/@RainofTerra/116168632108345829
Man, AI agents are remarkably bad at “self-awareness” like this. I’ve used one to configure some networking on a Raspberry Pi, and found myself reminding it frequently: “hey buddy, maybe don’t lock us out of connecting to this thing over the network, I really don’t want to have to wipe it because it’s running a headless OS”.
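The classic defense against exactly this failure mode (whether the hands on the keyboard are human or AI) is a dead man’s switch: snapshot the working config, schedule an automatic revert, and only cancel it once you’ve confirmed you can still get in. A minimal sketch with plain files standing in for the real thing; on an actual Pi you’d swap the cp lines for something like iptables-save / iptables-restore:

```shell
#!/bin/sh
# Dead man's switch sketch: a config change auto-reverts unless cancelled.
set -e
workdir=$(mktemp -d) && cd "$workdir"

echo "old rules" > rules.conf        # current, known-good config
cp rules.conf rules.conf.bak         # snapshot before touching anything

( sleep 5 && cp rules.conf.bak rules.conf ) &   # auto-revert fires in 5s
revert_pid=$!

echo "new rules" > rules.conf        # apply the risky change
# ...here you'd open a second SSH session to confirm you still have access...
kill "$revert_pid" 2>/dev/null       # access confirmed: cancel the revert
cat rules.conf
```

If you never get to the kill (because the change locked you out), the background job quietly puts the old config back and you can reconnect.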
It’s a perfect example of the kind of thing that “walk or drive to wash your car?” captures. I need you to pick up on some non-explicit context and make some basic logical inferences before you can be even remotely trusted to do anything important without very close expert supervision. And that degree of supervision almost makes it worthless for this kind of task, because the expert could just do it themselves instead.
For AI, I think a lot of future improvement will come from smaller, more specialized models trained on datasets curated by people who actually know what they’re doing and have good practices, as opposed to random garbage from GitHub, considering that a lot of what it outputs is of similarly garbage quality. (Especially now that vibecoding is a thing, training on the low-quality programs it generated itself might make the model worse.) And remote system configuration isn’t obscure, so I do think this specific issue will improve eventually. For truly obscure things, though, LLMs will never be able to manage.
I’m kinda hoping my shitty github repo is inadvertently poisoning the LLMs with my best efforts (basically degenerate-tier)…
AI agents are remarkably bad at “self-awareness”
Because today’s “AIs” are glorified T9 predictive text machines. They don’t have “self-awareness.”
Finally, T10
I think “contextual awareness” would fit better, and AI Believers preach that it’s great already. Any errors in LLM output are because the prompt wasn’t fondled enough/correctly, not because of any fundamental incapacity in word prediction machines completing logical reasoning tasks. Or something.
Ah, of course. The model isn’t wrong, it’s the input that’s wrong. Yes, yes. Please give me investment money now.
Hey, maybe if we’re lucky, Claude will accidentally lock the world out of using nukes forever.
Or, more likely, Claude will launch them.
Would be funny if it also forgets to open the hatches
Not if I launch them first! Where’s the 9v battery for the rocket engine igniter?
“…I really don’t want to have to wipe the thing because it’s running a headless OS”
I feel like logging in as root on a headless system and hoping you type the command(s) to restore functionality is a rite of passage.
So many times I’ve been able to wing it… Setting up the kitty terminal: just find out where the default config lives and copy it to where kitty expects it under your user. Read the config and change what you want. Easy peasy. Download GNU Stow, run stow -h, set up a small test directory, trial and error. Bam, done.
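The stow trial-and-error step compresses nicely into a dry run. A sketch assuming a dotfiles directory laid out the way stow expects (paths hypothetical, stow installed):

```shell
# Hypothetical layout: ~/dotfiles/kitty/.config/kitty/kitty.conf
cd ~/dotfiles
stow -nv kitty   # -n dry run, -v verbose: prints the links it would create
stow kitty       # actually symlink the kitty config into ~/
```

The -n flag is the “small test directory” without needing the test directory: you see every symlink before anything touches your home.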
When it comes to security, winging it just isn’t an option. The turn of phrase is RTFM, and the Arch wiki is my first stop. Set up fail2ban, sshd, snapper, and hibernation on btrfs swap the same day. Had some misconfiguration in my jail, but reading the systemctl output gave explicit errors that made it easy to figure out the config. In my case I hadn’t used the [sshd] header.
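For anyone hitting the same wall: fail2ban only applies settings listed under a named jail section, so a jail.local without the [sshd] header silently does nothing for ssh. A minimal sketch (the option values here are illustrative, not recommendations):

```ini
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
port     = ssh
maxretry = 5
bantime  = 1h
```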
Still have an issue when waking from hibernation where my desktop leaks for one second before the lock screen kicks in; I still need to set up a service to run a sleep 1 on wake. My ADHD has made me chase other squirrels.
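One common fix for that one-second flash is to lock the session before sleeping instead of racing the locker on wake. A sketch of a systemd unit along those lines (unit name hypothetical; assumes loginctl lock-sessions triggers your screen locker):

```ini
# /etc/systemd/system/lock-before-sleep.service
[Unit]
Description=Lock all sessions before suspend/hibernate
Before=sleep.target

[Service]
Type=oneshot
ExecStart=/usr/bin/loginctl lock-sessions

[Install]
WantedBy=sleep.target
```

Enable it with systemctl enable lock-before-sleep.service; sleep.target is pulled in for suspend and hibernate alike, so the lock lands before the system goes down either way.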