• AFK BRB Chocolate@lemmy.world · 16 points · 1 year ago

    It shouldn’t be surprising that text repeatedly fed into LLMs during training gets flagged as similar to what comes out of them. The Constitution, the Bible, and other works widely spread across the internet are likewise going to be flagged.

  • Hamartiogonic@sopuli.xyz · 11 points · 1 year ago

    If you’re serious about using a particular method to detect something, and you’re worried about false positives and false negatives, you have to dig a bit deeper into the statistics. Just like with COVID tests, you need to look at the sensitivity and specificity of the method instead of just calling it unreliable.
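    The point about sensitivity and specificity can be made concrete with Bayes’ rule: even a detector with good numbers for both produces a lot of false alarms when AI-written text is rare in the pool being tested. A minimal sketch (the 95%/95%/10% figures below are illustrative assumptions, not measured values for any real detector):

    ```python
    # Sketch: how sensitivity, specificity, and base rate combine (Bayes' rule).
    # All numbers are made up for illustration.

    def positive_predictive_value(sensitivity, specificity, prevalence):
        """Probability that a flagged text really is AI-written."""
        true_pos = sensitivity * prevalence
        false_pos = (1 - specificity) * (1 - prevalence)
        return true_pos / (true_pos + false_pos)

    # 95% sensitivity, 95% specificity, applied where only 1 in 10 texts is AI-generated:
    ppv = positive_predictive_value(0.95, 0.95, 0.10)
    print(f"{ppv:.2f}")  # prints 0.68 - roughly 1 in 3 flags is a false alarm
    ```

    The same detector applied to a pool where most texts are AI-written would look far more reliable, which is why quoting a single “accuracy” number is misleading.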

  • zkfcfbzr@lemmy.world (OP) · +7/−1 · 1 year ago

    Interesting article that goes into specific detail on what AI detectors look for.

    Interestingly, after reading the article I was able to get ChatGPT to write an essay that both GPTZero and ChatGPT classified as human-written just by asking it to write with “very high perplexity” (and then with “more perplexity” after the first one failed to pass the test).