• cholesterol@lemmy.world
    1 day ago

    you can’t trust its explanations as to what it has just done.

    I might have had a lucky guess, but this was basically my assumption. You can’t ask LLMs how they work and get an answer coming from an internal understanding of themselves, because they have no ‘internal’ experience.

    Unless you make a scanner like the one in the study, non-verbal processing is as much of a black box to their ‘output voice’ as it is to us.

    • cley_faye@lemmy.world
      23 hours ago

      Anyone who has used them for even a limited amount of time will tell you that the thing can give you a correct, detailed explanation of how to do something, then produce a broken result. And vice versa. Probing further by asking follow-up questions has zero chance of being useful.