Argentina’s security forces have announced plans to use artificial intelligence to “predict future crimes” in a move experts have warned could threaten citizens’ rights.

The country’s far-right president Javier Milei this week created the Artificial Intelligence Applied to Security Unit, which the legislation says will use “machine-learning algorithms to analyse historical crime data to predict future crimes”. It is also expected to deploy facial recognition software to identify “wanted persons”, patrol social media, and analyse real-time security camera footage to detect suspicious activities.

While the ministry of security has said the new unit will help to “detect potential threats, identify movements of criminal groups or anticipate disturbances”, the Minority Report-esque resolution has sent alarm bells ringing among human rights organisations.

  • Deestan@lemmy.world · 3 months ago

    Tech guy here.

    This is a tech-flavored smokescreen to avoid responsibility for misapplied law enforcement.

    • Johnmannesca@lemmy.world · 3 months ago

      By definition, everyone has the potential for criminality, especially those applying and enforcing the law. In fact, not even the AI is above the law, unless that's somehow changing. We need a lot of things on Earth first, like an IoT consortium for example, but an AI bill of rights in the US or EU should hopefully set a precedent for the rest of the world.

      • Deestan@lemmy.world · 3 months ago

        The AI is a pile of applied statistical models. The humans in charge of training it, testing it, and acting on its input have full control of and responsibility for anything that comes out of it. Personifying an AI system, or otherwise separating it from the will of its controllers, is dangerous because it erodes responsibility.

        Racist cops have used “I go where the crime is” as an excuse to basically hunt minorities for sport. Do not allow them to say “the AI model said this was efficient” and pretend it is not their own full and knowing bias directing them.

  • Blackmist@feddit.uk · 3 months ago

    Would you believe it, all those political enemies and protesters turned out to be future criminals?

    How fortunate we developed this system!

  • SlopppyEngineer@lemmy.world · 3 months ago

    That’s already been tried. In the end the AI is just an electronic version of existing police biases.

    Police file more reports and make more arrests in poor neighborhoods because they patrol more there. The reports get used as training data, so the AI predicts more crime in poor areas. Those areas then get over-patrolled, the tension leads to more crime, and the system is celebrated for being correct.
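That feedback loop can be sketched in a few lines. This is a toy simulation, not anything from the article: two hypothetical neighborhoods are given the exact same true crime rate, and the only difference is the initial patrol allocation. Reports scale with patrol presence, the "model" predicts risk from accumulated reports, and patrols are reallocated to match the prediction.

```python
# Toy sketch of the predictive-policing feedback loop (all numbers invented).
# Both neighborhoods have the SAME true crime rate; only the starting
# patrol allocation differs.
import random

random.seed(0)

TRUE_CRIME_RATE = 0.10          # identical in both neighborhoods
patrols = {"A": 8, "B": 2}      # initial allocation is the only difference
reports = {"A": 0, "B": 0}

for year in range(20):
    for hood in ("A", "B"):
        # More patrols means more observed incidents, hence more reports,
        # even though the underlying rate is the same everywhere.
        for _ in range(patrols[hood] * 100):
            if random.random() < TRUE_CRIME_RATE:
                reports[hood] += 1
    # The "model" predicts risk proportional to historical reports,
    # and next year's patrols are allocated to match the prediction.
    total = reports["A"] + reports["B"]
    patrols["A"] = round(10 * reports["A"] / total)
    patrols["B"] = 10 - patrols["A"]

# Despite identical true rates, neighborhood A ends up with roughly four
# times the reports of B, and the skewed data "confirms" the prediction.
print(patrols, reports)
```

The initial imbalance never corrects itself: the data the model learns from is generated by the model's own allocation decisions, which is exactly the self-confirming loop described above.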

    • Tryptaminev@lemm.ee · 3 months ago

      You make it sound like a bug instead of a feature. But for the capitalist ruling class it is working exactly as intended.

    • ours@lemmy.world · 3 months ago

      And that one could actually see the future, not just calculate biased statistics.

    • Saledovil@sh.itjust.works · 3 months ago

      It’s like in “Minority Report”: some of these crimes weren’t even premeditated, for example the one they stop at the beginning. The guy was about to stab his wife because he found out she’d been cheating on him. Chances are, if given time to process his feelings, he wouldn’t have done it.

  • CanadaPlus@lemmy.sdf.org · 3 months ago

    Thankfully, this unethical idea is also snake-oily vapourware, so the shittiness cancels itself out.

  • Mothra@mander.xyz · 3 months ago

    This sounds too surveillance-heavy for a self-proclaimed libertarian, and too flamboyant an economic investment for the guy who said to cut all unnecessary costs.

  • kromem@lemmy.world · 3 months ago

    Part of the problem with this approach is that prediction engines are predicated on the idea that there’s more of a thing to predict.

    So unless they really, really go out of their way with modeling the records to account for this, they’ll have a system very strongly biased towards predicting more criminal behavior for everyone fed into it.
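A rough sketch of the sampling problem being described, with entirely hypothetical numbers: if the records fed to the model consist mostly of people who already appear in police files, then a model calibrated to that data will learn a base rate of offending far above the population's true rate, and predict criminality for nearly everyone it scores.

```python
# Toy illustration of training-data sampling bias (all numbers invented).
# The population offends at 2%, but police records over-sample offenders.

# Hypothetical population: 20 offenders out of 1,000 people.
population = [1] * 20 + [0] * 980        # 1 = offended, 0 = did not

# Hypothetical records fed to the model: mostly offenders, a few bystanders.
records = [1] * 18 + [0] * 12

base_rate_population = sum(population) / len(population)   # 2% true rate
base_rate_training = sum(records) / len(records)           # 60% in the data

# A model calibrated to its training data would score people as likely
# offenders at 30x the true population rate.
print(base_rate_population, base_rate_training)
```

This is the "really, really go out of their way with modeling the records" problem: without explicitly correcting for how the records were sampled, the skew carries straight through into every prediction.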