• weew@lemmy.ca · 1 year ago

    Given how much AI has advanced in the past year alone, saying it will “always” be easy to spot is extremely short-sighted.

    • Kara@kbin.social · 1 year ago

      People seem to latch onto weaknesses AI has now and assume it will have them forever, like how text AI makes things up and image-generation AI can’t draw hands.

      But these AIs are advancing unimaginably quickly: 2 years ago generated text was pretty bad, often incoherent, and 1 year ago generated images were mostly strange mush.

      • aebrer@kbin.social · 1 year ago

        Spot on! People still talk about hands, but many newer image-gen models have already solved that; the hands they produce usually look perfectly fine these days.

    • Terrasque@infosec.pub · 1 year ago

      Some limitations are inherent in the way current LLMs work. The model doesn’t reason and doesn’t understand; it just predicts the next word from likely candidates based on the previous words. It can’t look ahead to check whether it’s heading toward an answer, and it can’t backtrack to change earlier words if it later turns out it’s written itself into a corner. It won’t even know it’s in a corner; it will just keep predicting in the pattern it’s seen, even if the result makes little or no sense to a human.
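      A toy sketch of that “predict the next word” loop makes the point. This is a bigram model, vastly simpler than a real LLM (the corpus and function names here are made up for illustration), but the decoding loop has the same shape: it only ever picks a likely next word, never looks ahead, and never revises what it has already emitted:

```python
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count, for each word, which words tend to follow it."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, max_words=10):
    out = [start]
    for _ in range(max_words - 1):
        followers = counts.get(out[-1])
        if not followers:
            break  # pattern runs out; there is no mechanism to backtrack
        # Greedy: always emit the single most likely next word,
        # with no notion of where the sentence is going.
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat the cat ran on the grass"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

      Real LLMs condition on far more context and sample from a learned distribution rather than raw counts, but the one-word-at-a-time, no-revision structure of the loop is the same.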

      It just mimics the source data it’s been trained on, following the patterns it learned there. At no point does it have any sort of understanding of what it’s saying. In some ways it’s similar to this, where a man learned how enough French words were spelled to win the national Scrabble competition without any clue what the words actually mean.

      And until we get a new approach beyond current LLMs, we can only improve them by adding more training data and more layers, letting them pick out subtler patterns in larger amounts of data. But with the current approach, you can’t guarantee that what the model writes will be correct, or even make sense.