Evaluating 35 open-weight models across three context lengths (32K, 128K, 200K), four temperatures, and three hardware platforms, consuming 172 billion tokens across more than 4,000 runs, we find that the answer is “substantially, and unavoidably.” Even under optimal conditions (the best model, with the temperature chosen specifically to minimize fabrication), the floor is non-zero and rises steeply with context length. At 32K, the best model (GLM 4.5) fabricates 1.19% of answers, other top-tier models fabricate 5–7%, and the median model fabricates roughly 25%.

  • CubitOom@infosec.pub · 3 days ago

    I’m not good at math, so someone please help me.

    If a model hallucinates 1% of the time for every question in a chat window that has 100 prompts in it, what is the chance of receiving a hallucination at some point in the chat?

    • Telorand@reddthat.com · 3 days ago

      Each individual answer has a 1-in-100 chance, but over 100 prompts those chances compound: the probability of at least one hallucination is 1 − 0.99^100, or roughly 63%. And that is simply a measure of probability, so do not expect it to hold exactly for every 100 prompts.
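
      Here’s the arithmetic as a minimal Python sketch (assuming each prompt independently hallucinates at a fixed 1% rate; the function name is mine, for illustration):

      ```python
      # Probability of at least one hallucination across n independent prompts,
      # each with the same per-prompt hallucination probability p (an assumption).
      def p_at_least_one(p: float, n: int) -> float:
          return 1 - (1 - p) ** n

      print(p_at_least_one(0.01, 100))  # ~0.634, i.e. about a 63% chance
      ```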

      For example, if you rolled a 100-sided die 100 times, it’s technically possible to roll a 1 every time, just as it’s possible to never roll one. In practice, you would likely see a mix: some sessions with no wrong answers and some with several.
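
      To make the die analogy concrete, here’s a small Monte Carlo sketch (Python, with a fair 100-sided die standing in for the assumed 1% per-prompt rate; the session and prompt counts are arbitrary):

      ```python
      import random

      # Simulate many 100-prompt "sessions"; each prompt hallucinates with
      # probability 1/100, modeled as rolling a 1 on a 100-sided die.
      SESSIONS, PROMPTS = 10_000, 100
      counts = [sum(random.randint(1, 100) == 1 for _ in range(PROMPTS))
                for _ in range(SESSIONS)]

      clean = sum(c == 0 for c in counts) / SESSIONS
      several = sum(c >= 2 for c in counts) / SESSIONS
      print(f"sessions with no hallucinations: {clean:.0%}")  # ~37%
      print(f"sessions with two or more:       {several:.0%}")  # ~26%
      ```

      In other words, roughly a third of 100-prompt sessions come out clean, while a comparable share contains more than one hallucination.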

      The problem is that ignorant people trust these models implicitly, because they sound convincing and authoritative, and many people are not equipped to vet the information being generated (also notice I didn’t say “retrieved”).