The cemetery of Minab, photographed as it prepares to bury more than 100 of the town’s young girls, is one of the defining images of the US-Israeli war on Iran, bluntly capturing the devastating civilian toll.

But is it real?

Ask Gemini, Google’s AI service, and the answer you receive is no – in fact, Gemini claims the photograph is from two years earlier and more than 2,000km (1,240 miles) away. Rather than graves for small girls killed by a missile, the image “depicts a mass burial site in Kahramanmaraş, Turkey” after the 7.8 magnitude earthquake that struck in 2023. “This specific aerial perspective became one of the most widely shared images of the disaster,” Gemini says, “illustrating the sheer scale of the loss.”

The cemetery image, it turns out, is authentic. Researchers have cross-referenced the photo of the site with satellite images that confirm its location; it can be cross-referenced again with dozens more images taken of the same site from slightly different angles, and again with video footage – none of which, experts say, shows signs of tampering or digital manipulation.

The false “factchecks” by Gemini and xAI’s chatbot Grok are just one example of a tidal wave of AI-generated slop – hallucinated facts, nonsense analysis and faked images – engulfing coverage of the Iran war. Experts say it is wasting investigative time and risks atrocities being denied, as well as exposing alarming weaknesses as people increasingly rely on AI summaries for news and information.

  • cecilkorik@lemmy.ca · 13 points · 2 days ago

    You can give Gemini the exact same prompt and context 100 different times and you might get 95 very similar responses and 5 wildly different responses.

    I don’t understand why people think a random text generator can ever be relied on for truth. It has no concept of truth. It is a random text generator. A pretty consistent one, but still fucking random. It has no intelligence. It is not intelligent. Stop acting like it is. Its conclusions are meaningless. They do not contain actual meaning. They are random.

    • TryingSomethingNew@sopuli.xyz · 1 point · 1 day ago

      You can force it to return essentially the same answer each time. There’s a setting in the API call named “temperature”, and it’s basically “how much leeway do you want to give it for ‘creativity’” (sketched below). But at low temperature it’s more apparent when the model has no clue, so providers raise the temperature to allow for different answers and combine that with RAG (retrieval-augmented generation) to get better overall answers.

      It’s still a fancy autocomplete, but the “why” can be interesting.
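
      A minimal sketch of what that knob looks like in practice, using the OpenAI Python client purely as an illustration (the model name and prompt here are placeholders; Gemini’s API exposes an equivalent temperature parameter):

          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          # temperature=0 makes decoding near-greedy: the model almost always
          # picks its single most probable next token, so repeated calls with
          # the same prompt return (nearly) identical text. Raising it lets
          # the sampler choose lower-probability tokens, which is where the
          # occasional wildly different response comes from.
          response = client.chat.completions.create(
              model="gpt-4o-mini",  # placeholder model name
              messages=[{"role": "user", "content": "Is this photo authentic?"}],
              temperature=0,
          )
          print(response.choices[0].message.content)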

      • cecilkorik@lemmy.ca · 3 points · 1 day ago

        I agree, they’re an extremely interesting technology. But laypeople are not going to understand why they’re interesting no matter how carefully you phrase it. I’m not trying to convince people who already understand what these things are that they’re uninteresting or lack real potential and real applications.

        I am trying to convince laypeople that they’re being misled (for profit) into believing these things are intelligent, can do things humans can do, and are capable of making decisions. I would rather have laypeople believe these are stupid atrocities against humanity (which is, in the current situation, closer to the truth) than bother trying to explain why it is still an interesting technology. If it ends up being completely banned (ha, fat chance), I’m not going to cry for it. I would rather have humanity protected from the vile, dishonest, and dangerous schemes this technology is being used for, even if that comes at the cost of ever being able to use it for good. My interest in it does not outweigh the harm that people are choosing to do with it.