• SlopppyEngineer@lemmy.world · 9 months ago

      The main difference is that human brains usually try to verify their extrapolations. The good ones do, anyway, though some end up in flat-earth territory.

    • Prandom_returns@lemm.ee · 9 months ago

      Yes, my keyboard autofill is just like your brain, but I think it’s a bit “smarter”, as it doesn’t generate bad-faith arguments.

      • NιƙƙιDιɱҽʂ@lemmy.world · 9 months ago

        Your Markov-chain-based keyboard prediction is a few tens of billions of parameters behind state-of-the-art LLMs, but pop off, queen…
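
        For scale, here is a minimal sketch of what a Markov-chain keyboard predictor amounts to, assuming a simple bigram model (the names here are illustrative, not any real keyboard’s code):

        ```python
        from collections import Counter, defaultdict

        class BigramPredictor:
            """Toy 'keyboard autofill': suggest the words most often seen after the previous word."""

            def __init__(self):
                # Maps each word to a Counter of the words observed right after it.
                self.followers = defaultdict(Counter)

            def train(self, text):
                words = text.lower().split()
                for prev, nxt in zip(words, words[1:]):
                    self.followers[prev][nxt] += 1

            def predict(self, word, n=3):
                # Top-n most frequent followers; no context beyond the single previous word.
                return [w for w, _ in self.followers[word.lower()].most_common(n)]

        model = BigramPredictor()
        model.train("the cat sat on the mat and the cat ran")
        print(model.predict("the"))  # ['cat', 'mat'] — picks from observed counts only
        ```

        The whole “model” is just those observed counts, one per bigram, versus billions of learned parameters in an LLM. That is the gap.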

        • Prandom_returns@lemm.ee · 9 months ago

          Thanks for the unprompted mansplanation, bro, but I was specifically referring to the comment that replied “JuSt lIkE hUmAn BrAin” to “they generate data based on other data”.

          • NιƙƙιDιɱҽʂ@lemmy.world · edited · 9 months ago

            That’s crazy, because they weren’t even talking about keyboard autofill, so why’d you even bring that up? How can you imply my comment is irrelevant when it’s a direct response to your initial irrelevant comment?

            Nice hijacking of the term mansplaining, btw. Super cool of you.

            • Prandom_returns@lemm.ee · 9 months ago

              Oh my god, we’ve got a sealion here.

              Fine, I’ll play along and chew it up for you, since you’ve been so helpful and mansplained that a keyboard is different from an LLM:

              My comment was responding to the anthropomorphization of software. Someone said it’s not human because it just generates output based on input. Someone else said “just like human brain”; I said yes, but also just like a keyboard, alluding to the false equivalence.

              Clearer?