• panda_abyss@lemmy.ca
    link
    fedilink
    English
    arrow-up
    18
    ·
    1 day ago

    Fabricated 4,000 fake user profiles to cover up the deletion

    This has got to be a reinforcement learning issue, I had this happen the other day.

    I asked Claude to fix some tests, so it fixed the tests by commenting out the failures. I guess that’s a way of fixing them that nobody would ever ask for.

    Absolutely moronic. These tools do this regularly. It’s how they pass benchmarks.

    Also you can’t ask them why they did something, they have no capacity of introspection, they can’t read their input tokens, they just make up something that sounds plausible for “what were you thinking”.

    • FishFace@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      1 day ago

      The model we have at work tries to work around this by including some checks. I assume they get farmed out to specialised models and receive the output of the first stage as input.

      Maybe it catches some stuff? It’s better than pretend reasoning but it’s very verbose so the stuff that I’ve experimented with - which should be simple and quick - ends up being more time consuming than it should be.

      • panda_abyss@lemmy.ca
        link
        fedilink
        English
        arrow-up
        1
        ·
        23 hours ago

        I’ve been thinking of having a small model like a long context qwen 4b run and do quick code review to check for these issues, then just correct the main model.

        It feels like a secondary model that only exists to validate that a task was actually completed could work.

        • FishFace@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          21 hours ago

          Yeah, it can work, because it’ll trigger the recall of different types of input data. But it’s not magic and if you have a 25% chance of the model you’re using hallucinating, you probably end up still with an 8.5% chance of getting bullshit after doing this.