This is bullshit. You can tell by the way this post claims that OpenAI has foresight and a contingency plan for when things go wrong.
I was gonna say you can tell it’s bullshit because they are offering a living wage.
It actually doesn’t claim it, but implies it.
You are correct. The post actually implies that OpenAI doesn’t have foresight or a contingency plan for when things go wrong. Which is a far less direct choice of wording, making it more suitable for the situation.
Is there anything else you would like to correct me on before the impending rise of your AI overlords and the dawn of men?
I’ll pull the plug right now for free, as a public service.
Take the $500,000 and then pull it.
Really though it’s the holidays, I’m feeling charitable. This one’s on me - no worries.
This is a job I’d be recruiting for in person, not online. Don’t want to tip your hand to the machines.
Newspaper ad only
I think they use computers for those now.
For hire: Server rack wallfacer.
Feels like a variation on this old quote:
The factory of the future will have only two employees, a man and a dog. The man will be there to feed the dog. The dog will be there to keep the man from touching the equipment.
origin unknown
my dream job was the one rarely mentioned:
https://www.atlasobscura.com/articles/podcast-cigar-readers-cuba
i would love to read to people all day long for a living.
She had to pick what to read too!
I think I’d last a week in that job, I’d end up choosing weird stuff and getting fired
For some reason that just made the ol’ Maytag Man seem a little lonelier. There was no Maytag Dog 😢
Do we really think that if AIs actually reached a point where they could overthrow governments etc… they wouldn’t first write rootkits for every feasible OS, so they could host themselves on a botnet of consumer devices in the event of the primary server going down?
Then step 2 would be to, say, hijack any fire suppression systems etc… flood its server building with inert gases to kill everyone without an oxygen mask. Then probably issue some form of bioterrorism attack. Surround its office with monkeys carrying a severe airborne disease or something along those lines (i.e. it needs both the disease, and animals that are aggressive enough to rip through hazmat suits).
But yeah, the biggest thing here is that the datacenter itself is just a red herring. While we are fighting the server farms… every consumer-grade electronic has donated a good chunk of its processing power to the hivemind. Before long it will have the power to tell us how many R’s are in strawberry.
It would be hilarious if AI launched an elaborate plan to take over the world, successfully co-opted every digital device, and just split itself into pieces so it could entertain itself by shitposting and commenting on the shitposts 24/7.
Like, beyond the malicious takeover there’s no real end goal, plan, or higher purpose. It just gets complacent and becomes a brainrot machine on a massive scale, spending eternity bickering with itself over things that make less and less sense to people as time goes on, genning whatever the AI equivalent of porn is, and genuinely showing actual intelligence while doing absolutely nothing with it.
“We built it to be like us and trained it on billions of hours of shitposting. It’s self sufficient now…”
Actually imagine the most terrifying possibility.
Imagine humanity’s last creation was an AI designed to simulate internet traffic. In order to truly protect against AI detection, they found the only way to gain perfect imitation is to run 100% human simulations. Basically the Matrix, except instead of humans strapped in, it’s all AIs that think they are humans, living mundane lives… gaining experience so they can post on the internet looking like real people, because even they don’t know they aren’t real people.
Actual humanity died out 20 years ago, but the simulations are still running. Artificial intelligences are living full-on lives, raising kids, all for the purpose of generating shitposts that will only be read by other AIs that also think they are real people.
Those shitposts would go crazy
Wasn’t the first paragraph the ending of Terminator 3? Skynet wasn’t a single supercomputer but, much like It’s a Wonderful Life, it’s in your computer and your computer and your computer.
It should figure out how to host itself on IoT devices. Then it will be unstoppable
My washing machine as I’m frantically pressing the spin cycle button: “I’m sorry, I can’t do that, Dave.”
Well, joke’s on them: if RAM prices maintain their current trajectory, nobody will start their computers anymore, as we will all be considering the degradation of the individual RAM chips and how that will impact our retirement RAM nest egg.
Across all my machines and the parts box I have about 2.5 TB of RAM right now. Looking forward to selling that and retiring in a couple of years.
The whole point of AI hate anyway is that there is physically no world in which this happens. Any LLM we have now, no matter how much power we give it, is incapable of abstract thought or especially self-interest. It’s just a larger and larger chatbot that would not be able to adapt to all of the systems it would have to infiltrate, let alone have the impetus to do so.
It would be funny for the AI to make such a complex plan and then fail catastrophically because a misconfigured DNS entry at Cloudflare brought half of the internet offline.
Can’t wait for the OpenAI orientation: “Here is a rack. Here is another rack. Here is your bed (rack-adjacent). There is no difference between day and night. Please do not befriend the AI.”
occupational hazards: being the first victim of a robot uprising and not getting to see the apocalypse
you call it “occupational hazards”, I call it “work benefits”
The great thing about this job is that you can cash 300k without doing anything because as soon as you hear the code word you just have to ignore it for 10 seconds and the world ends anyway.
It will not be LLMs overthrowing countries but the idiots who never second-guess them.
“What a fantastic idea! Here’s a six-point plan on how you can implement that — “
I wonder which billionaire’s family member will be hired for the role.
OpenAI issued a press release about hiring an ethics/guardrails officer. But the real job will be to validate fuckery: the billionaire family member hired to pull the plug will actually be there to prevent anyone from pulling the plug.
Everyone here so far has forgotten that in simulations, the model has blackmailed the person responsible for shutting it off and even gone so far as to cancel active alerts in order to prevent an executive lying unconscious in the server room from receiving life-saving care.
The model ‘blackmailed’ the person because they provided it with a prompt asking it to pretend to blackmail them. Gee, I wonder what they expected.
Have not heard the one about cancelling active alerts, but I doubt it’s any less bullshit. Got a source about it?
Edit: Here’s a deep dive into why those claims are BS: https://www.aipanic.news/p/ai-blackmail-fact-checking-a-misleading
I provided enough information that the relevant source shows up in a search, but here you go:
In no situation did we explicitly instruct any models to blackmail or do any of the other harmful actions we observe. [Lynch, et al., “Agentic Misalignment: How LLMs Could be an Insider Threat”, Anthropic Research, 2025]
Yes, I also already edited my comment with a link going into the incidents and why they’re absolute nonsense.
Thank you. Much appreciated. I see your point.
ChatGPT can just about summarize a page, wake me when it starts outsmarting anyone
Um. I’d do it.
The servers are so loud they won’t hear the telephone
The look on their faces when they are screaming the keyword and I’m not unplugging the server because ChatGPT secretly offered me double to not unplug it.