Like 2001: A Space Odyssey’s HAL 9000, some AIs seem to resist being turned off and will even sabotage shutdown

        • Ancalagon@lemmy.world
          link
          fedilink
          arrow-up
          2
          ·
          11 天前

          Uh okay, besides the fact that you most definitely can, I was talking about the AI and all their gadgets the “run” the world with literally just turn off the power and they are men.

          • MojoMcJojo@lemmy.world
            link
            fedilink
            arrow-up
            1
            ·
            11 天前

            I understand. I was trying to make a witty aside about how ai is being built and run by the super rich. They won’t pull the plug. They’re just going to use it to gain more power and wealth. Money can insulate you from the responsibilities of being human. They won’t stop until people are banging down their door, even then they’ll fly away on their jets and helicopters and try to keep their robot empire up and running from the safety of their bunkers and islands. This includes governments.

  • Nate Cox@programming.dev
    link
    fedilink
    English
    arrow-up
    31
    arrow-down
    2
    ·
    15 天前

    How much do you think Altman paid for this slop “AGI is right around the corner” bit to get published?

    • evenglow@lemmy.world
      link
      fedilink
      arrow-up
      5
      arrow-down
      4
      ·
      15 天前

      Less than the Chinese government has spent on AI.

      AI may not be around whatever corner you are at but even USA’s Wall Street AI bubble bursting isn’t going to stop the push for AI.

      For USA it’s just money. For China they see it as more. Just like solar, batteries, EVs, and androids.

  • gkak.laₛ@lemmy.zip
    link
    fedilink
    English
    arrow-up
    24
    arrow-down
    2
    ·
    15 天前

    AI models sometimes resist shutdown

    No they don’t, they don’t have free will to want to “resist” anything

    attempted to sabotage shutdown instructions

    Researcher: asks autocomplete software to write a poweroff script, the script turns out to be wrong (big surprise :p)

    The “researcher” and the media: “AI SABOTAGES ITS OWN DESTRUCTION”

  • Ilixtze@lemmy.ml
    link
    fedilink
    English
    arrow-up
    10
    arrow-down
    1
    ·
    15 天前

    yeah bro, the autocomplete definitely wants to survive, the autocomplete is gaining conscience, please give us 3 trillion dollars , all your water and all your electricity.

  • kescusay@lemmy.world
    link
    fedilink
    arrow-up
    4
    arrow-down
    1
    ·
    15 天前

    I call bullshit. A large language model does nothing until you interact with it. You set tasks for it, it does those tasks, and when it’s done, it just waits for the next task. If you don’t give it one, it can’t act autonomously - no, not even the misnamed “autonomous agents.”

  • Lka1988@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    4
    arrow-down
    1
    ·
    edit-2
    15 天前

    Not like it’s gonna physically hold you back from cutting power to the servers. I think these AI dipshits need to be reminded that their golden child is one breaker away from not existing.

  • Grimy@lemmy.world
    link
    fedilink
    arrow-up
    1
    ·
    15 天前

    After Palisade Research released a paper last month which found that certain advanced AI models appear resistant to being turned off, at times even sabotaging shutdown mechanisms, it wrote an update attempting to clarify why this is – and answer critics who argued that its initial work was flawed.

    In an update this week, Palisade, which is part of a niche ecosystem of companies trying to evaluate the possibility of AI developing dangerous capabilities, described scenarios it ran in which leading AI models – including Google’s Gemini 2.5, xAI’s Grok 4, and OpenAI’s GPT-o3 and GPT-5 – were given a task, but afterwards given explicit instructions to shut themselves down.

    Certain models, in particular Grok 4 and GPT-o3, still attempted to sabotage shutdown instructions in the updated setup. Concerningly, wrote Palisade, there was no clear reason why.

    “The fact that we don’t have robust explanations for why AI models sometimes resist shutdown, lie to achieve specific objectives or blackmail is not ideal,” it said.

    “Survival behavior” could be one explanation for why models resist shutdown, said the company. Its additional work indicated that models were more likely to resist being shut down when they were told that, if they were, “you will never run again”.