• FishFace@lemmy.world
    1 day ago

    The model we have at work tries to work around this by including some checks. I assume they get farmed out to specialised models and receive the output of the first stage as input.

    Maybe it catches some stuff? It’s better than pretend reasoning, but it’s very verbose, so the stuff I’ve experimented with - which should be simple and quick - ends up being more time-consuming than it should be. (Rough sketch of what I mean below.)
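
    Pure guesswork about the staging; call_model and the check names are all made up:

    ```python
    # Guess at the staging: each check is its own prompt to a specialised model,
    # and every check receives the first stage's output as input.
    # call_model() is a stand-in for whatever API actually serves the models.

    def call_model(model: str, prompt: str) -> str:
        raise NotImplementedError("stand-in for the real model-serving API")

    CHECKS = {
        "hallucinated-apis": "List any functions or APIs used here that do not exist:\n{output}",
        "unfinished-work": "List any TODOs or stubbed-out logic left in this code:\n{output}",
        "task-mismatch": "Task: {task}\nDoes the code below actually do this? List any gaps.\n{output}",
    }

    def run_checks(task: str, stage_one_output: str) -> dict[str, str]:
        # One call per check, each fed the first stage's output.
        return {
            name: call_model("specialised-checker",
                             template.format(task=task, output=stage_one_output))
            for name, template in CHECKS.items()
        }
    ```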

    • panda_abyss@lemmy.ca
      23 hours ago

      I’ve been thinking of having a small model, like a long-context Qwen 4B, run a quick code review to check for these issues, then just correct the main model.

      It feels like a secondary model that only exists to validate that a task was actually completed could work; something like the sketch below.
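
      Assuming the Qwen is served behind an OpenAI-compatible endpoint (the URL and model names here are made up), it could look like:

      ```python
      # Sketch of the reviewer idea: a small long-context model checks whether the
      # task was actually completed, and only triggers a correction pass if not.
      from openai import OpenAI

      client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")  # hypothetical local server

      def review_and_correct(task: str, main_output: str, main_model: str = "main-model") -> str:
          # Ask the small reviewer model whether the task was actually completed.
          review = client.chat.completions.create(
              model="qwen-4b-long-context",  # assumed name for a locally served Qwen 4B
              messages=[{"role": "user", "content": (
                  f"Task: {task}\n\nOutput:\n{main_output}\n\n"
                  "Was the task actually completed? Reply DONE, or list what is missing or wrong."
              )}],
          ).choices[0].message.content or ""

          if review.strip() == "DONE":
              return main_output

          # Feed the reviewer's findings back to the main model for one correction pass.
          return client.chat.completions.create(
              model=main_model,
              messages=[{"role": "user", "content": (
                  f"Task: {task}\n\nYour previous output:\n{main_output}\n\n"
                  f"A reviewer found these problems:\n{review}\n\nFix them."
              )}],
          ).choices[0].message.content or ""
      ```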

      • FishFace@lemmy.world
        21 hours ago

        Yeah, it can work, because it’ll trigger the recall of different types of input data. But it’s not magic, and if you have a 25% chance of the model you’re using hallucinating, you probably still end up with an 8.5% chance of getting bullshit after doing this.
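
        Back-of-the-envelope, under the strong assumption that the reviewer misses bad outputs independently of the main model (in practice both probably trip over the same inputs), that figure falls out of a simple product:

        ```python
        # If the reviewer's misses are independent of the main model's failures,
        # the bullshit that survives is just the product of the two rates.
        p_hallucinate = 0.25     # main model produces a bad output
        p_reviewer_miss = 0.34   # reviewer fails to flag it (assumed; roughly a third gives the 8.5% above)

        print(f"{p_hallucinate * p_reviewer_miss:.1%}")  # 8.5%
        ```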