• minorkeys@lemmy.world · 1 month ago

    The public fundamentally misunderstands this tech because salesmen lied to them. An LLM is not AI in any meaningful sense. It just says the most likely thing, based on what is most common in its training data for that scenario. It can’t do math or solve problems; it can only tell you what the most likely answer would be. It’s like Family Feud: it says what the most people surveyed said.
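
The Family Feud framing can be sketched in a few lines: a toy "model" that just returns whichever continuation was most frequent in its training data. This is an illustration of the analogy only (names and data are made up), not how a real LLM is implemented.

```python
from collections import Counter

# Toy "survey says" predictor: return the continuation seen most
# often in the training data. No computation or reasoning involved.
training_data = [
    ("2 + 2 =", "4"),
    ("2 + 2 =", "4"),
    ("2 + 2 =", "5"),   # noise in the data gets memorized too
    ("the sky is", "blue"),
]

def most_likely(prompt):
    counts = Counter(c for p, c in training_data if p == prompt)
    return counts.most_common(1)[0][0]

print(most_likely("2 + 2 ="))  # -> 4 (the popular answer, not a computed one)
```

The point of the sketch: "4" comes out because it was common, not because anything added 2 and 2.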

    • Clent@lemmy.dbzer0.com · 1 month ago

      Some of them will “do math,” but not with the LLM predictor: they have a math engine, and the predictor decides when to use it. What’s great is that when it outputs results, it’s not clear whether it engaged the math engine or just guessed.
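
A hedged sketch of that routing (the structure and names are invented for illustration, not any vendor's implementation): a real calculator sits behind a "predictor" that decides whether to call it, and the bare answer alone doesn't reveal which path was taken.

```python
import ast
import operator

# Minimal "math engine": safely evaluates + - * over numeric literals.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

def math_engine(expr):
    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        return _OPS[type(node.op)](ev(node.left), ev(node.right))
    return ev(ast.parse(expr, mode="eval").body)

def answer(question, predictor_uses_tool):
    # The "predictor" chooses the route; the returned number doesn't
    # say whether it was computed or pattern-matched.
    if predictor_uses_tool:
        return math_engine(question)
    return 400  # a plausible-looking guess for "17 * 23"

print(answer("17 * 23", predictor_uses_tool=True))   # 391 (computed)
print(answer("17 * 23", predictor_uses_tool=False))  # 400 (guessed)
```

Both outputs look equally confident to the user, which is exactly the problem the comment describes.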

      • hikaru755@lemmy.world · 1 month ago

        when it outputs results, it’s not clear if it engaged the math engine or just guessed

        That depends on the harness, though. In the plain model output it will be clear whether a tool call happened; it depends on the application UI around it whether that’s shown directly to the user, or whether you only see the LLM’s final response based on it.
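
As a rough sketch of what the harness sees (the field names here are assumptions loosely modeled on common chat-API response shapes, not any specific vendor's API): the raw output records the tool call explicitly, even if the UI never surfaces it.

```python
# Raw model output as a harness might receive it: the tool invocation
# is recorded alongside the final text.
raw_response = {
    "content": "17 * 23 = 391",
    "tool_calls": [{"name": "calculator", "arguments": {"expr": "17 * 23"}}],
}

def engaged_math_engine(response):
    """True if the model actually invoked a tool for this answer."""
    return bool(response.get("tool_calls"))

print(engaged_math_engine(raw_response))                 # True
print(engaged_math_engine({"content": "17 * 23 = 391"})) # False
```

Whether the end user ever sees that boolean is purely a UI decision.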

    • 1D10@lemmy.world · 1 month ago

      I explain it as asking 100 people to Google something and taking the most common answer.
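
That analogy in code (a sketch, with made-up poll data): collect many noisy answers and return whichever is most common, with no check of whether it's actually correct.

```python
from collections import Counter

# 100 imaginary "people Googling": most said blue, some misread the question.
poll = ["blue"] * 71 + ["green"] * 18 + ["no idea"] * 11

def consensus(answers):
    # Popularity contest, not verification.
    return Counter(answers).most_common(1)[0][0]

print(consensus(poll))  # -> blue: the popular answer, right or not
```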

        • 1D10@lemmy.world · 1 month ago

          Yep, but instead of “name something a woman keeps in her purse,” it’s “write my legal document” or “is it OK to lick a lamp socket.”

            • felbane@lemmy.world · 1 month ago

            Great question! The answer to all three of your queries is “yes.” Would you like me to search for the nearest lamp socket?

    • SorryQuick@lemmy.ca · 1 month ago

      Is a human much different? We too require tons of training, and we too are prone to stupid mistakes.

      • Scubus@sh.itjust.works · 1 month ago

        Fundamentally, yes and no. The original commenter could’ve saved his breath; if people wanted to be educated on AI, they have plenty of resources to do so, but instead they choose to remain ill-informed. The difference is that humans are capable of critical thinking and conceptual connection. We are just as prone to mistakes as AI; we just have a much higher aptitude for mistakes, lol. Hence the goal isn’t to make a perfect AI, but the much more achievable one of making AIs that beat us in specific fields, and then in all fields.

        • SorryQuick@lemmy.ca · 1 month ago

          It’s obviously missing features (think neuroplasticity), but is that how AI fundamentally differs from human intelligence, or simply a lack in the current generation?

            • Scubus@sh.itjust.works · 1 month ago

            It seems to be a flaw on both the hardware and software side of things. Hardware-wise, we have yet to make chips that achieve the processing density of human brain matter, and heat generation becomes an issue as you try to scale smaller systems up. Software-wise, we know our current neural networks don’t scale up well, so we seem to be waiting on more foundational research into more efficient algorithms. My suspicion is that we’re not really going to get true general superintelligence until we start manufacturing chips that incorporate living neurons; it just really seems cheaper to use an already-existing computing system than to design your own architecture.