• tyler@programming.dev
LLMs have been shown to have emergent math capabilities (ones that weren't the target of training), so you're simplifying way too much. Yes, a lot of it is just "predictive text," but there's also a ton of "this wasn't in the training and we don't know how it knows this."