• SparroHawc@lemmy.zip
    6 hours ago

    That’s because it doesn’t really ‘know’ things in the same way you and I do. It’s much more like having a gut reaction to something and spitting it out as truth; LLMs don’t really have the capability to ruminate on something. The one pass through their neural network is all they get, unless it’s a ‘reasoning’ model that makes multiple passes to generate an approximation of a train of thought - but even then, its output is still a series of approximations.
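
    To make the ‘one pass’ point concrete, here’s a toy sketch (not a real LLM - the lookup table stands in for the network’s next-token prediction): generation is just one forward pass per token, appended and never revisited.

    ```python
    # Hypothetical next-token table standing in for a neural network's
    # single-pass prediction. The model never revises earlier output.
    NEXT_TOKEN = {
        "the": "capital",
        "capital": "of",
        "of": "france",
        "france": "is",
        "is": "london",  # a confidently wrong "gut reaction"
    }

    def generate(prompt, steps):
        tokens = prompt.split()
        for _ in range(steps):
            # One "pass": context in, single best guess out, appended.
            tokens.append(NEXT_TOKEN.get(tokens[-1], "<eos>"))
        return " ".join(tokens)

    print(generate("the", 5))
    ```

    The wrong token (‘london’) gets emitted and stays emitted - there’s no built-in step where the model goes back and checks itself.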

    When its training data contained something resembling corrections, the most likely text to follow was ‘oh, you’re right, let me fix that’ - so that’s what the LLM outputs. That’s all there is to it.