I am convinced we’re already seeing generative AI used to “verify” military targets, both by Israel (including now, during its genocide in Palestine) and by the US, as well as by ICE, which uses facial recognition to confirm targets for detainment. If we survive this period, any attempt to hold these people accountable for their actions will be met with the excuse that they were using generative AI to pick targets and cannot be blamed when the AI makes a mistake. We must not let them use that excuse to skirt accountability for indiscriminate bombings and assaults on civilian targets across the world.

Unfortunately, I cannot see any way out of this so long as we have no government capable, or willing, to rein in the power of these large-data corporations and tech bros. All of this is orchestrated by a group of ultra-wealthy pedophiles pushing technologically clueless dotards in government to spearhead the use of generative AI for military and surveillance applications, when the technology is misunderstood at best and actively dangerous at worst. I believe this will only get worse before it gets better if we do nothing to stop it, and I don’t know what we can do to stop any of this at this point.

  • zebidiah@lemmy.ca · 14 points · 6 hours ago

    No… The modern Nuremberg excuse will be “we all have mortgages to pay…”

    Thanks late stage capitalism!

  • NuXCOM_90Percent@lemmy.zip · 13 points · 6 hours ago

    Considering that we have folk ALL over social media already using “they are just following orders” to excuse what the US military is already doing to our own fucking people…

    But I also suggest reading up a lot more on what the Nuremberg Trials actually were (and weren’t). It is good that we kind of universally acknowledged that “I was just following orders” is not an excuse for murder, rape, and genocide (… for about as long as we acknowledged Nazis were the universal evil). But the reality is that it was a complete and utter shitshow with very few actual convictions, in large part because… war is a LOT more “air bud rules” than people realize. And most of the ceremonial high-profile targets… actually had good lawyers.

    Over-dramatized YouTube essay, but Jacob Geller has a really good video where he goes into this.

    • Ænima@lemmy.zipOP · 3 points · 6 hours ago

      Because I thought about it while showering, after reading about how AI is going to be used in military applications, knowing full well it is not even correctly identifying dangerous outcomes in non-military applications. Why do you ask?

      • Chippys_mittens@lemmy.world · 3 points · 6 hours ago

        Just fairly long-winded and intense for scrub-a-dub time, I guess. You do you, though; I don’t think it’s necessarily wrong for the community.

        • Ænima@lemmy.zipOP · 4 points · 6 hours ago

          Unfortunately, it’s the world forced upon all of us, and it's not a hard thing to think about when the last thing I saw before the shower was this incredibly reckless genAI proposal. Trust me, I’d rather think about puppies or kittens.

  • givesomefucks@lemmy.world · English · 7 points · 6 hours ago

    AI is the logical conclusion to McKinsey analysts…

    Why pay a “coming soon CEO” to tell you to fuck over workers and raise CEO pay when you can program a computer to do it?

    If it doesn’t tell them what they want to hear, that means it needs “tweaked.” If it tells them their pre-existing opinion is right, then it’s not their idea anymore, and the AI is to blame when it backfires.

    • zout@fedia.io · 1 point · 6 hours ago

      If AI were to be the downfall of McKinsey and their peers, more people would be for it.