I hear people saying things like “ChatGPT is basically just fancy predictive text”. I’m certainly not in the “it’s sentient!” camp, but it seems pretty obvious that a lot more is going on than just predicting the most likely next word.

Even if it’s predicting word by word within a bunch of constraints and structures inferred from the question/prompt, that’s pretty interesting. Tbh, I’m more impressed by ChatGPT’s ability to appear to “understand” my prompts than I am by the quality of the output. Even though its writing is generally a mix of bland, obvious and inaccurate, it mostly does provide a plausible response to whatever I’ve asked or said.

Anyone feel like providing an ELI5 explanation of how it works? Or any good links to articles / videos?

  • bionicjoey@lemmy.ca · 8 months ago

    It is literally the same kind of algorithm that predicts the next word you type on your phone based on what’s already been typed. The differences are that it has a much larger training dataset, which means more accurate predictions; it processes the entire body of text it has already been given (including the hidden prompt and previous messages); and it doesn’t always predict whole words, but instead clusters of characters called tokens.
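
    A minimal sketch of that idea as a toy bigram model (hypothetical corpus; real LLMs use a huge neural network over tokens rather than a count table, but the “predict the likeliest continuation” framing is the same):

    ```python
    from collections import Counter, defaultdict

    # Tiny stand-in for a training dataset (hypothetical example).
    corpus = "the cat sat on the mat and the cat ate the fish".split()

    # Count how often each word follows each other word (a bigram model).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        """Return the most frequently observed next word."""
        counts = following[word]
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))  # 'cat' -- seen twice, vs. once each for 'mat' and 'fish'
    ```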

    If you want a more general overview of how machine learning works, this is a good video series to watch: https://www.youtube.com/watch?v=aircAruvnKk

    If you want to see some evidence that it doesn’t truly understand what it says, try having it generate and explain some jokes or riddles that rely on wordplay. It will completely shatter the illusion.

    • QuaternionsRock@lemmy.world · 8 months ago

      Me:

      You are a talented comedian. Generate some jokes or riddles that rely on wordplay.

      ChatGPT 4:

      Sure, here are a few wordplay-based jokes and riddles:

      Why don’t scientists trust atoms? Because they make up everything!

      I told my wife she should embrace her mistakes. She gave me a hug.

      What do you call fake spaghetti? An “impasta.”

      I’m reading a book on anti-gravity. It’s impossible to put down!

      Why was the math book sad? Because it had too many problems.

      Enjoy these light-hearted plays on words!

      Edit: I prodded it a little, and I actually quite like the fourth one below.

      Me:

      These aren’t very clever. Can you generate some that a more mature audience would appreciate?

      ChatGPT:

      Of course, here are some more sophisticated wordplay-based jokes suitable for a mature audience:

      Parallel lines have so much in common. It’s a shame they’ll never meet.

      A Freudian slip is when you say one thing but mean your mother. I mean another.

      I’m reading a book on the history of glue. Can’t put it down, but I’m sticking to it.

      Time flies like an arrow; fruit flies like a banana. A classic example of linguistic ambiguity.

      The rotation of Earth really makes my day. Quite literally.

      These jokes involve a bit more linguistic and conceptual play, appealing to an audience that enjoys a deeper level of wordplay.

      • bionicjoey@lemmy.ca · 8 months ago

        All of those jokes are plagiarized. It doesn’t actually understand the jokes; it’s just repeating ones that it’s seen before. Ask it to explain why some of these are funny.

        • shastaxc@lemm.ee · 8 months ago

          Seriously. I’ve literally heard all of those jokes before. It may have even stolen them all from a single website.

      • guyrocket@kbin.social · 8 months ago

        I agree, that was good.

        My major takeaway is that neural networks, and AI in general, are mostly pattern recognition with a little bias and weighting thrown in to improve accuracy.

        And that is why I question the supposedly amazing things people seem to think it will do, and many of the proposed applications of AI.

        • bionicjoey@lemmy.ca · 8 months ago

          That’s exactly right. It is a statistical model that is based on some training dataset. The quality of the predictions is only as good as the completeness and bias of the training set.
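
          A toy illustration of that point, with two made-up mini-corpora (hypothetical example; the principle is the same at scale):

          ```python
          from collections import Counter

          def most_common_continuation(corpus, word):
              """Most frequent word that follows `word` in the corpus."""
              pairs = zip(corpus, corpus[1:])
              return Counter(nxt for prev, nxt in pairs if prev == word).most_common(1)[0][0]

          corpus_a = "the weather is sunny and the sky is sunny".split()
          corpus_b = "the weather is awful and the mood is awful".split()

          # Same model, same prompt, different training data -> different "beliefs".
          print(most_common_continuation(corpus_a, "is"))  # 'sunny'
          print(most_common_continuation(corpus_b, "is"))  # 'awful'
          ```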

          • dustyData@lemmy.world · 8 months ago

            And it is one of the major issues with giving AI, and the corporations who make it, free rein to “think” and inform decision making. Feed it a racist dataset, and the AI will be racist. Feed it misinformation, and the AI will only reproduce misinformation.

            • snooggums@kbin.social · 8 months ago

              The proof that AI is just garbage in, garbage out is that it always does this, while some people respond to the same material by becoming anti-racist and anti-misinformation, even if most people fall for it.

              Feed it a racist dataset, and the AI will be racist. Feed it misinformation, and the AI will only reproduce misinformation.

        • Acamon@lemmy.world (OP) · 8 months ago

          That’s my take as well, I would just like to know more about the weighting/bias.

          • bionicjoey@lemmy.ca · 8 months ago

            Weighting and bias are based on the training dataset. And the training dataset of ChatGPT is mostly internet content, literature, social media discussions, articles, etc.

            So its inherent biases and blind spots follow from that data. For example, ChatGPT is not good at generating or interpreting code written in Malbolge, even though that language is meant to be relatively easy for a machine to process yet nearly impossible for a human to understand. It isn’t processing like a machine; it is processing text like a person.

            It is also bad at understanding wordplay like puns, since wordplay requires simultaneously grasping the meaning of a word and the linguistics that underlie it. It is decent at reciting puns which already exist out in the world, but it can’t creatively generate new ones or interpret novel puns or other wordplay, since that would require a deeper understanding of the language.

            Looking at the things it is bad at can give a great insight into its limitations, and in turn into how it works.

  • DarkGamer@kbin.social · 8 months ago

    I found this example useful: this dude builds a simple LLM that writes infinite Shakespeare and walks you through each step.

    ELI5: large language models like ChatGPT are really good at guessing which combinations of words are most likely to be a good response to a question. They’re so good at it that many people think they are intelligent, even though they’re not.

  • As I understand it, the big innovation that allowed things to advance so fast is called “attention”.

    Basically it’s not just learning which word is next, it’s also learning which of the previous words is the most important context for predicting the next word. This is what allows it to learn the grammatical structure of language which is important for a couple reasons.

    For one, it can use that info to better identify whether data in the training set is related to the subject at hand. E.g. it can better pick out keywords, so it knows that two texts containing the word “the” might not be related at all, but two texts containing the word “silicon” probably are.

    It might even be able to “understand meaning” by finding the relationships of words to each other. If it comes across a phrase like “minor, as in the key and not the child”, it can figure out not only that “key” is important to the subject at hand, but also that information about children is actively unhelpful for predicting the next word.

    It then uses this information to stay on topic, avoid mixing nonsense sentence structures together, and, increasingly, predict what an expert in the field might say.

    There are more tricks to it than this, including ones that I don’t know about or understand, but I’ve heard that the recent advancement is largely due to “attention”.
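
    For the curious, here’s a bare-bones sketch of the scaled dot-product attention mechanism at the heart of this (NumPy, toy sizes; in a real transformer the Q/K/V matrices come from learned projections of the input, and masking restricts each word to attending only to earlier words):

    ```python
    import numpy as np

    def attention(Q, K, V):
        """Each position gathers a weighted average of all values,
        weighted by how similar its query is to every key."""
        scores = Q @ K.T / np.sqrt(Q.shape[-1])       # relevance of every word to every other word
        scores -= scores.max(axis=-1, keepdims=True)  # shift for numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
        return weights @ V                            # blend of values, focused on relevant words

    # Toy example: 3 "words", each represented by a 4-dimensional vector.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(3, 4))
    print(attention(x, x, x))  # self-attention: output shape (3, 4)
    ```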

    • dumbcrumb@lemmy.world · 8 months ago

      Because when you ask it to solve a math problem, it isn’t actually solving the problem like you and I would. It’s taking your input and comparing it to the data it has previously been trained on to give the most likely response. If it were actually solving the problem, it wouldn’t mess up on simple stuff. You can see this if you give it a math problem that isn’t commonly worked through online, like a few multiplications of large numbers in a row: it will just spit out some random large number that isn’t even close.
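
      For contrast, a conventional program gets this right every time, because it actually runs the multiplication algorithm instead of predicting plausible-looking digits (made-up numbers, just to illustrate):

      ```python
      # Exact arithmetic: the interpreter executes the multiplication
      # algorithm, so unusual inputs are no harder than common ones.
      a, b, c = 48_271, 90_019, 73_856
      print(a * b * c)  # deterministic, exact answer every time
      ```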