• gaylord_fartmaster@lemmy.world · 8 months ago

    Machine learning could find those strengths and weaknesses, and learn to work around them, likely better than a human could. It’s just trial and error. There’s nothing about the human brain that makes it better suited to understanding the inner logic of an LLM.

    • WhatAmLemmy@lemmy.world · 8 months ago (edited)

      Congrats. You don’t understand the difference between a statistical model and a human.

      I expected more from a gaylord fartmaster. 2/10.

      • gaylord_fartmaster@lemmy.world · 8 months ago

        In what way?

        Why couldn’t even a basic reinforcement learning model be used to brute-force “figure out what input gives the desired X output”?
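
        For illustration, here’s a minimal sketch of that trial-and-error idea, using plain random search rather than actual reinforcement learning; `black_box` and the target value are hypothetical stand-ins for a real model:

        ```python
        import random

        # Hypothetical stand-in for an opaque model; in practice this would
        # be a call into the LLM whose behavior is being probed.
        def black_box(x: int) -> int:
            return (x * 37 + 11) % 1000  # internals unknown to the searcher

        DESIRED_OUTPUT = 42

        def search(trials: int = 1_000_000) -> int | None:
            """Pure trial and error: sample inputs until one hits the target."""
            for _ in range(trials):
                candidate = random.randrange(1_000_000)
                if black_box(candidate) == DESIRED_OUTPUT:
                    return candidate
            return None

        print(search())  # prints some x with (37 * x + 11) % 1000 == 42
        ```

        An actual RL setup would replace the blind sampling with a policy updated from reward (how close the output is to the target), but the loop has the same shape.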

        • WhatAmLemmy@lemmy.world · 8 months ago

          Because the training data is man-made, so it will never be 100% accurate, and because critical thought is required to set the desired output and to judge whether the output makes sense?

          Statistical models find patterns in ones and zeros. They don’t apply critical thought.

    • jacksilver@lemmy.world · 8 months ago

      Actually, most (I think all, but I’m not positive) machine learning models are incapable of doing straight arithmetic. Due to the way they’re built, ML models, including deep learning models, can only learn relationships within a limited input space.

      This is most apparent when you test LLMs on different arithmetic operations (a quick test sketch follows the list):

      • Addition holds up okay until you get into the millions or billions.
      • Multiplication, I think, breaks down around the 100/1000 level.
      • Exponents break almost immediately.
      • Decimal values also break relatively quickly, for any operation.
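
      A harness for this kind of test might look like the sketch below; `ask_llm` is a hypothetical placeholder, not a real client API:

      ```python
      import operator

      def ask_llm(prompt: str) -> str:
          # Hypothetical stand-in: swap in a real model call here.
          # Returns a dummy answer so the harness runs end to end.
          return "0"

      # One case per failure mode from the list above.
      CASES = [
          ("2 + 2", operator.add, 2, 2),
          ("123456789 + 987654321", operator.add, 123456789, 987654321),
          ("1234 * 5678", operator.mul, 1234, 5678),
          ("7 ** 13", operator.pow, 7, 13),
          ("3.14159 * 2.71828", operator.mul, 3.14159, 2.71828),
      ]

      for expr, fn, a, b in CASES:
          truth = fn(a, b)
          reply = ask_llm(f"Compute {expr}. Reply with only the number.")
          print(f"{expr}: model={reply} truth={truth}")
      ```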

      This has to do with the fact that LLMs are effectively stacks of linear functions (with simple nonlinearities in between), so higher-order operations break down faster.
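
      As a minimal illustration of that intuition (assuming numpy): stacked linear layers collapse into a single linear map, and no linear map can compute a product of its inputs.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      W1 = rng.normal(size=(8, 2))  # first "layer"
      W2 = rng.normal(size=(1, 8))  # second "layer"
      x = rng.normal(size=(2,))

      # Without nonlinearities in between, depth adds nothing:
      assert np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x)

      # Multiplication of inputs isn't linear: doubling x quadruples
      # x[0] * x[1], so no single weight matrix can represent it.
      print((2 * x[0]) * (2 * x[1]), "vs", 2 * (x[0] * x[1]))
      ```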