IBM researchers said a ChatGPT-generated phishing email was almost as effective at fooling people as a human-written version.

  • MysticKetchup@lemmy.world · 1 year ago

    IBM researchers said a ChatGPT-generated phishing email was almost as effective at fooling people as a human-written version.

    So it’s less effective than a regular phishing email?

    • snooggums@kbin.social · 1 year ago

      Yes, but being about the same means ChatGPT could be used to automatically create massive amounts of personalized phishing emails at low cost in a very short time. Basically doing what they do now, but even faster.

    • FoundTheVegan@kbin.social · 1 year ago

      And crafting a carefully targeted phishing email took a human team around 16 hours, they wrote, while ChatGPT took just minutes

      This is significant because any person with the desire to scam can use ChatGPT from the comfort of their own home over lunch instead of hiring professionals for a few days.

      • dack@lemmy.world · 1 year ago

        No, it’s significant because attackers can pump out way more emails while also customizing them to their targets and constantly varying them to evade detection.

  • Moobythegoldensock@lemm.ee · 1 year ago

    And crafting a carefully targeted phishing email took a human team around 16 hours

    Ummm what? Back in college, I used to budget 30-45 minutes a page for essays. What the hell are they writing that took a team of people 16 fucking hours for a few paragraphs of text?

    • Lichtblitz@discuss.tchncs.de · 1 year ago

      I guess they mean person-hours, since they are referring to a team. An initial brainstorming session, a review session or two, and 16 hours are quickly gone.

    • cybersandwich@lemmy.world · 1 year ago

      What the hell are they writing that took a team of people 16 fucking hours for a few paragraphs of text?

      An invoice full of billable hours.

  • Bogasse@lemmy.ml · 1 year ago

    To be honest, phishing emails are so bad that I don’t see how any generative AI couldn’t be better. Just making fewer than two typos per sentence would be enough.

    Someone explained to me that it may be intentional that phishing emails are so bad: it acts as a pre-filter, so you only spend time and resources dealing with presumably very gullible people.

    • Artyom@lemm.ee · 1 year ago

      The typos are intentional. They filter out intelligent recipients who wouldn’t fall for the scam.

      • hedgehog@ttrpg.network · 1 year ago

        The typos have been theorized to be intentional (for that reason), but that isn’t the only theory, and afaik those theories aren’t based on conversations with the people crafting those emails.

        It’s also been theorized that phishing emails frequently have typos (intentionally) to lower people’s resistance to well-crafted phishing emails, particularly spear phishing.

        There’s also the fact that many phishing emails are crafted by people for whom English is not their first language, and even accounting for that, phishing emails are still better written than spam emails, so it’s quite likely that in many cases the typos aren’t intentional at all.

  • AutoTL;DR@lemmings.world [bot] · 1 year ago

    This is the best summary I could come up with:


    (tldr: 2 sentences skipped)

    Case in point, IBM researchers posted an internal study that details how they unleashed a ChatGPT-generated phishing email on a real healthcare company to see if it could fool people as effectively as a human-penned one.

    (tldr: 2 sentences skipped)

    “Humans may have narrowly won this match, but AI is constantly improving,” IBM hacker Stephanie Carruthers wrote of the work.

    “As technology advances, we can only expect AI to become more sophisticated and potentially even outperform humans one day.”

    Given these results and AI chatbots rapidly improving, what can individuals do against this inbox onslaught?

    IBM’s suggestions ranged from common-sense advice, like calling the purported sender if something looks suspicious, to anemic tips, like looking out for “longer emails,” which they said are “often a hallmark of AI-generated text.”

    The bottom line, though, is just to use your common sense — and to prepare yourself for an internet that looks set to be rapidly overrun with AI-generated content, malicious or otherwise.


    The original article contains 250 words, the summary contains 163 words. Saved 35%. I’m a bot and I’m open source!

  • Rhoeri@lemmy.world · 1 year ago

    The simple fact that people still fall for phishing scams is a great indicator that we’ve always been going nowhere.

    • RGB3x3@lemmy.world · 1 year ago

      Phishing scams are getting really good these days. It’s no longer just the obvious Nigerian prince-type scams.

      They make emails nearly identical to real ones, they’re able to fake sender names, they actually use real English.

      If you think you wouldn’t fall for a phishing email, you’re kidding yourself. All it takes is one lapse of judgement while you’re too busy to realize an email is fake.

    • afraid_of_zombies@lemmy.world · 1 year ago

      Oh please, you can’t be 100% mistrustful all the time. Eventually you are going to slip up and assume good faith. This is why it is important to stop people from doing it instead of blaming victims.

      Also, who knows how many people who do fall for these things are mentally disabled.