• Perspectivist@feddit.uk
    2 months ago

    Anyone who has a knee-jerk reaction the moment someone mentions AI is no better than the people they’re criticizing. Horseshoe theory applies here too - the most vocal AI haters are just as out of touch as the people who treat everything an LLM says as gospel.

    • RememberTheApollo_@lemmy.world
      2 months ago

      If someone said they got a second opinion from a physician known for being wrong half the time, wouldn’t you wonder why they didn’t choose someone more reliable for something as important as their health? AI is notorious for providing incomplete, irrelevant, heavily slanted, or just plain wrong information. Why give it any level of trust in national decisions? Might as well, I dunno…use a bible? Some would consider that trustworthy.

      • Perspectivist@feddit.uk
        2 months ago

        I often ask ChatGPT for a second opinion, and the responses range from “not helpful” to “good point, I hadn’t thought of that.” It’s hit or miss. But just because half the time the suggestions aren’t helpful doesn’t mean it’s useless. It’s not doing the thinking for me - it’s giving me food for thought.

        The problem isn’t taking what an LLM says into consideration - the problem is blindly taking it at its word.