Does AI actually help students learn? A recent experiment in a high school provides a cautionary tale.

Researchers at the University of Pennsylvania found that Turkish high school students who had access to ChatGPT while doing practice math problems did worse on a math test compared with students who didn’t have access to ChatGPT. Those with ChatGPT solved 48 percent more of the practice problems correctly, but they ultimately scored 17 percent worse on a test of the topic that the students were learning.

A third group of students had access to a revised version of ChatGPT that functioned more like a tutor. This chatbot was programmed to provide hints without directly divulging the answer. The students who used it did spectacularly better on the practice problems, solving 127 percent more of them correctly compared with students who did their practice work without any high-tech aids. But on a test afterwards, these AI-tutored students did no better. Students who just did their practice problems the old-fashioned way — on their own — matched their test scores.

  • N0body@lemmy.dbzer0.com · ↑177 ↓4 · 16 days ago

    Traditional instruction gave the same result as a bleeding edge ChatGPT tutorial bot. Imagine what would happen if a tiny fraction of the billions spent to develop this technology went into funding improved traditional instruction.

    Better paid teachers, better resources, studies geared at optimizing traditional instruction, etc.

    “Move fast and break things” was always a stupid goal. Turbocharging it with all this money is killing the tried-and-true options that actually produce results, while straining the power grid and worsening global warming.

      • elvith@feddit.org · ↑40 ↓1 · 16 days ago

        It’s the other way round: Education makes for less gullible people and for workers who demand more rights more freely and easily - and then those workers come for their yachts…

    • Petter1@lemm.ee · ↑15 ↓2 · 16 days ago

      Imagine all the money spent on war would be invested into education 🫣what a beautiful world we would live in.

    • otp@sh.itjust.works · ↑17 ↓15 · 16 days ago

      Traditional instruction gave the same result as a bleeding edge ChatGPT tutorial bot.

      Interesting way of looking at it. I disagree with your conclusion about the study, though.

      It seems like the AI tool would be helpful for things like assignments rather than tests. I think it’s intellectually dishonest to ignore the gains in some environments because it doesn’t have gains in others.

      You’re also comparing a young technology to methods that have been adapted over hundreds of thousands of years. Was the first automobile entirely superior to every horse?

      I get that some people just hate AI because it’s AI. For the people interested in nuance, I think this study is interesting. I think other studies will seek to build on it.

      • Kalysta@lemm.ee · ↑2 · 15 days ago

        The point of assignments is to help study for your test.

        Homework is forced study. If you’re just handed the answers, you will do shit on the test.

        • otp@sh.itjust.works · ↑2 · 14 days ago

          The point of assignments is to help study for your test.

          To me, “assignment” is more of a project. Not rote practice. Applying knowledge to a bit of a longer term, multi-part project.

    • technocrit@lemmy.dbzer0.com · ↑1 ↓6 · 15 days ago

      The education system is primarily about controlling bodies and minds. So any actual education is counter-productive.

    • littlewonder@lemmy.world · ↑3 ↓8 · edited 2 days ago

      LLMs/GPT, and other forms of the AI boogeyman, are all just a tool we can use to augment education when it makes sense. Just like the introduction of calculators or the internet, AI isn’t going to be the easy button, nor is it going to steal all teachers’ jobs. These tools need to be studied, trained for, and applied purposely in order to be most effective.

      EDIT: Downvoters, I’d appreciate some engagement on why you disagree.

  • 2ugly2live@lemmy.world · ↑64 ↓1 · 15 days ago

    I don’t even know if this is ChatGPT’s fault. This would be the same outcome if someone just gave them the answers to a study packet. Yes, they’ll have the answers because someone (or something) gave the answers to them, but they won’t know how to get those answers on their own without being taught. Surprise: for kids to learn, they need to be taught. Shocker.

    • Buddahriffic@lemmy.world · ↑9 ↓2 · 15 days ago

      I’ve found chatGPT to be a great learning aid. You just don’t use it to jump straight to the answers, you use it to explore the gaps and edges of what you know or understand. Add context and details, not final answers.

      • IzzyScissor@lemmy.world · ↑10 · 15 days ago

        The study shows that once you remove the LLM though, the benefit disappears. If you rely on an LLM to help break things down or add context and details, you don’t learn those skills on your own.

        I used it to learn some coding, but without using it again, I couldn’t replicate my own code. It’s a struggle, but I don’t think using it as a teaching aid is a good idea yet, maybe ever.

        • jpeps@lemmy.world · ↑1 · 15 days ago

          I wouldn’t say this matches my experience. I’ve used LLMs to improve my understanding of a topic I’m already skilled in, and I’m just looking to understand something nuanced. Being able to interrogate on a very specific question that I can appreciate the answer to is really useful and definitely sticks with me beyond the chat.

    • ameancow@lemmy.world · ↑22 ↓2 · edited 15 days ago

      The only reason we’re trying to somehow compromise and allow or even incorporate cheating software into student education is because the tech-bros and singularity cultists have been hyping this technology like it’s the new, unstoppable force of nature that is going to wash over all things and bring about the new Golden Age of humanity as none of us have to work ever again.

      Meanwhile, 80% of AI startups sink, and something like 75% of the “new techs” like AI drive-thru orders and AI phone support go to call centers in India and the Philippines. The only thing we seem to have gotten is the absolute rotting destruction of all content on the internet and children growing up thinking it’s normal to consume this watered-down, plagiarized, worthless content.

    • ChickenLadyLovesLife@lemmy.world · ↑12 · 15 days ago

      I took German in high school and cheated by inventing my own runic script. I would draw elaborate fantasy/sci-fi drawings on the covers of my notebooks with the German verb declensions and whatnot written all over monoliths or knight’s armor or dueling spaceships, using my own script instead of regular characters, and then have these notebooks sitting on my desk while taking the tests. I got 100% on every test and now the only German I can speak is the bullshit I remember Nightcrawler from the X-Men saying. Unglaublich!

      • blazeknave@lemmy.world · ↑2 · 14 days ago

        I just wrote really small on a paper in my glasses case, or hid data in the depths of my TI-86.

        We love Nightcrawler in this house.

    • michaelmrose@lemmy.world · ↑3 · 15 days ago

      Actually, if you read the article, ChatGPT is horrible at math. A modified version where ChatGPT was fed the correct answers along with the problems didn’t make the kids stupider, but it didn’t make them any better either, because they mostly just asked it for the answers.

  • Insig@lemmy.world · ↑46 ↓1 · 15 days ago

    At work we gave a 16/17-year-old work experience over the summer. He was using ChatGPT and not understanding the code it was outputting.

    In his last week he asked why he was doing a print statement, something like

    print(f"message {thing} ")
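For readers wondering the same thing as that student: the `f` prefix marks a Python formatted string literal (an f-string, available since Python 3.6), which evaluates the expressions inside braces and interpolates them into the string. A minimal sketch:

```python
# The "f" marks a formatted string literal (f-string, Python 3.6+):
# expressions inside {braces} are evaluated and spliced into the string.
thing = "world"
print(f"message {thing}")          # prints: message world

# Equivalent older styles the student may have seen elsewhere:
print("message {}".format(thing))  # str.format
print("message %s" % thing)        # printf-style formatting
```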

      • ulterno@lemmy.kde.social · ↑9 ↓1 · 15 days ago

        Students first need to learn to:

        1. Break down the line of code, then
        2. Ask the right questions

        The student in question probably didn’t develop the mental faculties required to think, “Hmm… what the ‘f’?”

        A similar thing happened to me when I had to teach a BTech grad with 2 years of prior experience. At first, I found it hard to believe that someone couldn’t ask such questions of themselves, by themselves. I am repeatedly dumbfounded at how someone manages to be so ignorant of something they are typing, and have recently realised (after interacting with multiple such people) that this is actually the norm[1].


        1. and that I am the weirdo for trying hard and visualising the C++ abstract machine in my mind ↩︎

      • Buddahriffic@lemmy.world · ↑2 ↓2 · 15 days ago

        It all depends on how and what you ask it, plus an element of randomness. Remember that it’s essentially a massive text predictor. The same question asked in different ways can lead it into predicting text based on different conversations it trained on. There’s a ton of people talking about python, some know it well, others not as well. And the LLM can end up giving some kind of hybrid of multiple other answers.

        It doesn’t understand anything, it’s just built a massive network of correlations such that if you type “Python”, it will “want” to “talk” about scripting or snakes (just tried it, it preferred the scripting language, even when I said “snake”, it asked me if I wanted help implementing the snake game in Python 😂).

        So it is very possible for it to give accurate responses sometimes and wildly different responses at other times. Like with the “African countries that start with ‘K’” question, I’ve seen reasonable responses and meme ones. It’s even said there are none while also acknowledging Kenya in the same response.
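The “massive text predictor” framing above can be illustrated with a deliberately tiny sketch (real LLMs use neural networks over subword tokens and far longer context, but the predict-the-next-token idea is the same): a bigram model that proposes whichever word most often followed the given word in its toy training text.

```python
from collections import Counter, defaultdict

# Toy "text predictor": count which word follows which in the training
# text, then predict the most frequent continuation. The Python-the-
# language vs. python-the-snake ambiguity is resolved purely by which
# association dominates the training data.
corpus = (
    "python is a scripting language . "
    "python is a snake . "
    "python is a scripting language ."
).split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    # Most frequent continuation seen in the training data.
    return following[word].most_common(1)[0][0]

print(predict("python"))     # prints: is
print(predict("a"))          # prints: scripting  (2 votes beat "snake"'s 1)
```

No understanding is involved; changing the training text changes the "answers", which is the randomness the comment describes.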

    • copd@lemmy.world · ↑1 · 15 days ago

      I’m afraid to ask, but what’s wrong with that line? In the right context that’s fine to do, no?

      • Insig@lemmy.world · ↑1 · 14 days ago

        There is nothing wrong with it. He just didn’t know what it meant after using it for a little over a month.

    • LifeInMultipleChoice@lemmy.world · ↑27 ↓30 · 16 days ago

      “Tests designed for people who don’t use ChatGPT are performed better by people who don’t.”

      This is the same fn calculator argument we had 20 years ago.

      A tool is a tool. It will come in handy, and if the tool will be there in real life, then a test that bans it is a dumb test.

      • conciselyverbose@sh.itjust.works · ↑41 ↓4 · 15 days ago

        The point of learning isn’t just access to that information later. That basic understanding gets built on all the way up through the end of your education, and is the base to all sorts of real world application.

        There’s no overlap at all between people who can’t pass a test without an LLM and people who understand the material.

          • conciselyverbose@sh.itjust.works · ↑3 ↓2 · 15 days ago

            We learned that calculators hinder learning. Arithmetic is a core competency you can’t do algebra without, let alone higher math.

            I really have no idea why you’re asserting the opposite so confidently. Calculators are not beneficial.

      • FlorianSimon@sh.itjust.works · ↑15 ↓2 · edited 14 days ago

        This is ridiculous. The world doesn’t have to bend the knee to LLMs, they’re supposed to be useful tools to solve problems.

        And I don’t see why asking them to help with math problems would be unreasonable.

        And even if the formulation of the test was not done the right way, your argument is still invalid. LLMs were being used as an aid. The test wasn’t given to the LLM directly. But students failed to use the tool to their advantage.

        This is yet another hint that the grift doesn’t actually serve people.

        Another thing these bullshit machines can’t do! The list is getting pretty long.

        About the calculator argument… Well, the calculator is still used in class, because it makes sense in certain contexts. But nobody ever sold calculators saying they would teach you math and would be a do-everything machine.

        • conciselyverbose@sh.itjust.works · ↑2 ↓2 · edited 15 days ago

          Also actual mathematicians are pretty much universally capable of doing many calculations to reasonable precision in their head, because internalizing the relationships between numbers and various mathematical constructs is necessary to be able to reason about them and use them in more than trivial ways.

          Tests for recall aren’t because the specific piece of information is the point. They’re because being able to retrieve the information is essential to integrate it into scenarios where you can utilize it, just like being able to do math without a calculator is needed to actually apply math in ways that aren’t ~~proscribed~~ prescribed for you.

          • ulterno@lemmy.kde.social · ↑1 · 15 days ago

            proscribed

            err… I’m finding it hard to understand the meaning of the sentence using the dictionary meaning of this word. Did you mean to use some other word?

            • conciselyverbose@sh.itjust.works · ↑2 · edited 15 days ago

              I’d love to tell you how the hell I got there. My brain exploded I guess. I meant prescribed, in the sense that you’re following the exact script someone laid out before you.

              I had a physics class in college where we spent each section working through problems to demonstrate the concepts. You were allowed a page “cheat sheet” to use on the exams, and the exams were pretty much the same problems with the numbers changed. Lots of people got As in that class. Not many learned basic physics.

              A lot of people don’t get further than that in math, because they don’t understand the basic building blocks. Plugging numbers into a formula isn’t worthless, and a calculator helps that. But it doesn’t help you once the problem changes a little instead of just the inputs.

              • ulterno@lemmy.kde.social · ↑2 · 15 days ago

                You were allowed a page “cheat sheet” to use on the exams, and the exams were pretty much the same problems with the numbers changed.

                That seems like the worst way of making an exam.
                If the cheat sheet were not there, the exam would at least be testing something (i.e. how many formulae you memorised), albeit something useless.

                When you let students have a cheat sheet, it should be obvious that this will be a HOTS (higher-order thinking skills) test. Well, maybe for teachers lacking said HOTS, it was not obvious.

                • conciselyverbose@sh.itjust.works · ↑2 · edited 15 days ago

                  Yeah, I’m all for “you don’t have to memorize every formula”, but I would have just provided a generic formula sheet and made people at least get from there to the solutions, even if you did the same basic problems.

                  It’s hard for me to objectively comment on the difficulty of the material because I’d already had most of the material in high school physics and it was pretty much just basic algebra to get from any of the formulas provided to the solution, but the people following the sheets took the full hour to do the exams that took me 5 minutes without the silly cheat sheet, because they didn’t learn anything in the class.

                  (Edit: the wild part is that a sizable number of people in the class actually studied, like multiple hours, for that test with the exact same problems we had in class with numbers changed, while also bringing the cheat sheet where they had the full step by step solutions in for the test.)

      • bluewing@lemm.ee · ↑9 · 15 days ago

        As someone who has taught math to students in a classroom, unless you have at least a basic understanding of HOW the numbers are supposed to work, the tool - a calculator - is useless. While getting the correct answer is important, I was more concerned with HOW you got that answer. Because if you know how you got that answer, then your ability to get the correct answer skyrockets.

        Because doing it your way leads to blindly relying on AI and believing those answers are always right. Because it’s just a tool right?

        • LifeInMultipleChoice@lemmy.world · ↑1 ↓1 · 15 days ago

          Nowhere did I say a kid shouldn’t learn how to do it. I said it’s a tool, and I’m saying it’s a dumb argument/discussion.

          If I said students who only ever used a calculator didn’t do as well on a test where calculators weren’t allowed, you would say “yeah, no shit.”

          This is just an anti-technology, anti-new-generation separation piece that divides people and will ultimately create rifts that help us ignore real problems.

      • EatATaco@lemm.ee · ↑6 · 15 days ago

        The main goal of learning is learning how to learn, or learning how to figure new things out. If “a tool can do it better, so there is no point in not allowing it” was the metric, we would be doing a disservice because no one would understand why things work the way they do, and thus be less equipped to further our knowledge.

        This is why I think common core, at least for math, is such a good thing because it teaches you methods that help you intuitively figure out how to get to the answer, rather than some mindless set of steps that gets you to the answer.

        • blackbirdbiryani@lemmy.world · ↑12 ↓2 · 15 days ago

          Because a huge part about learning is actually figuring out how to extract/summarise information from imperfect sources to solve related problems.

          If you use ChatGPT as a crutch because you’re too lazy to read between the lines and infer meaning from text, then you’re not exercising that particular skill.

          • billwashere@lemmy.world · ↑3 ↓4 · 15 days ago

            I don’t disagree, but that’s like saying using a calculator will hurt your understanding of higher-order math. It’s a tool, not a crutch. I’ve used it many times to help me understand concepts just out of reach. I don’t trust anything LLMs say implicitly, but it can and does help me.

            • WordBox@lemmy.world · ↑8 · 15 days ago

              Congrats but there’s a reason teachers ban calculators… And it’s not always for the pain.

              • assassin_aragorn@lemmy.world · ↑1 · 15 days ago

                In some cases I’d argue, as an engineer, that having no calculator makes students better at advanced math and problem solving. It forces you to work with the variables and understand how to do the derivation. You learn a lot more manipulating the ideal gas formula as variables and then plugging in numbers at the end, versus adding numbers to start with. You start to implicitly understand the direct and inverse relationships with variables.

                Plus, learning to directly use variables is very helpful for coding. And it makes problem solving much more of a focus. I once didn’t have enough time left in an exam to come to a final numerical answer, so I instead wrote out exactly what steps I would take to get the answer – which included doing some graphical solutions on a graphing calculator. I wrote how to use all the results, and I ended up with full credit for the question.

                To me, that is the ultimate goal of math and problem solving education. The student should be able to describe how to solve the problem even without the tools to find the exact answer.

              • billwashere@lemmy.world · ↑1 ↓3 · 15 days ago

                Take a college physics test without a calculator if you wanna talk about pain. And I doubt you could find a single person who could calculate trig functions or logarithms longhand. At some point you move past the point of proving you can do arithmetic. It’s just not necessary.

                The real interesting thing here is whether an LLM is useful as a study aid. It looks like more research is necessary. But an LLM is not smart. It’s a complicated next-word predictor, and they have been known to go off the rails for sure. And this article suggests it’s not as useful as you might think for new learners.

                • WordBox@lemmy.world · ↑1 · 9 days ago

                  Chem is a long-forgotten memory, but trig… It’s a matter of precision to do by hand. Very far from impossible… I’m pretty sure you learn about precision before trig… maybe algebra I or II. E.g. can you accept pi as 3.14? Or 3.14xxxxxxxxxxxxxxxxxxxxxxxxxx?

                  Trig is just rad with pi.
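Computing trig functions “longhand” to a chosen precision is indeed tractable. As a sketch (not from the thread), one classical method truncates the Taylor series for sine; the number of terms kept is exactly the precision trade-off being discussed.

```python
import math

# sin(x) = x - x^3/3! + x^5/5! - ...
# Each term of this alternating series can be computed by hand; keeping
# more terms buys more correct digits, which is the precision trade-off.
def sin_taylor(x, terms=6):
    total, sign = 0.0, 1.0
    for k in range(terms):
        total += sign * x ** (2 * k + 1) / math.factorial(2 * k + 1)
        sign = -sign
    return total

x = math.pi / 4          # accept pi to however many digits you like
print(sin_taylor(x))     # close to math.sin(x), i.e. about 0.7071
```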

              • Skates@feddit.nl · ↑1 ↓7 · 15 days ago

                There are many reasons for why some teachers do some things.

                We should not forget that one of them is “because they’re useless cunts who have no idea what they’re doing and they’re just powertripping their way through some kids’ education until the next paycheck”.

                • Zoot@reddthat.com · ↑4 · edited 15 days ago

                  Not knowing how to add 6 + 8 just because a calculator is always available isn’t okay.

                  I have friends in my DnD sessions who have to count numbers together on their fingers, and I feel bad for them. Don’t blame a teacher for banning a calculator when they just want you to be a smarter, more efficient and productive person.

  • maegul (he/they)@lemmy.ml · ↑31 ↓4 · 16 days ago

    Yea, this highlights a fundamental tension I think: sometimes, perhaps oftentimes, the point of doing something is the doing itself, not the result.

    Tech is hyper focused on removing the “doing” and reproducing the result. Now that it’s trying to put itself into the “thinking” part of human work, this tension is making itself unavoidable.

    I think we can all take it as a given that we don’t want to hand total control to machines, simply because of accountability issues. Which means we want a human “in the loop” to ensure things stay sensible. But the ability of that human to keep things sensible requires skills, experience and insight. And all of the focus our education system now has on grades and certificates has led us astray into thinking that practice and experience don’t mean that much. In a way the labour market and employers are relevant here in their insistence on experience (to the point of absurdity sometimes).

    Bottom line is that we humans are doing machines, and we learn through practice and experience, in ways I suspect much closer to building intuitions. Being stuck on a problem, being confused and getting things wrong are all part of this experience. Making it easier to get the right answer is not making education better. LLMs likely have no good role to play in education and I wouldn’t be surprised if banning them outright in what may become a harshly fought battle isn’t too far away.

    All that being said, I also think LLMs raise questions about what it is we’re doing with our education and tests and whether the simple response to their existence is to conclude that anything an LLM can easily do well isn’t worth assessing. Of course, as I’ve said above, that’s likely manifestly rubbish … building up an intelligent and capable human likely requires getting them to do things an LLM could easily do. But the question still stands I think about whether we need to also find a way to focus more on the less mechanical parts of human intelligence and education.

    • Passerby6497@lemmy.world · ↑7 ↓1 · 15 days ago

      LLMs likely have no good role to play in education and I wouldn’t be surprised if banning them outright in what may become a harshly fought battle isn’t too far away.

      While I agree that LLMs have no place in education, you’re not going to be able to do more than just ban them in class unfortunately. Students will be able to use them at home, and the alleged “LLM detection” applications are no better than throwing a dart at the wall. You may catch a couple students, but you’re going to falsely accuse many more. The only surefire way to catch them is them being stupid and not bothering to edit what they turn in.

      • maegul (he/they)@lemmy.ml · ↑1 · 15 days ago

        Yea I know, which is why I said it may become a harsh battle. Not being in education, it really seems like a difficult situation. My broader point about the harsh battle was that if it becomes well known that LLMs are bad for a child’s development, then there’ll be a good amount of anxiety from parents etc.

  • Praise Idleness@sh.itjust.works · ↑36 ↓12 · 16 days ago

    It’s not about using it. It’s about using it in a helpful and constructive manner. Obviously no one’s going to learn anything if all they do is blatantly ask it for answers and written work.

    LLM has been a wonderful tool for me to further understand various topics.

    • Petter1@lemm.ee · ↑19 ↓9 · 16 days ago

      This! Don’t blame the tech, blame the grown-ups not able to teach the young how to use tech!

      • jacksilver@lemmy.world · ↑2 · 15 days ago

        The study is still valuable; this is a math class, not a technology class, so understanding its impact is important.

        • Petter1@lemm.ee · ↑1 ↓3 · 15 days ago

          Yeah, I did not read that the prompt-engineered ChatGPT was better than the non-ChatGPT class 😄 but I guess that proves my point as well: if the students in the normal-ChatGPT group had been taught how to prompt it so that it answers in a more teacher-like style, I bet they would have had similar results to the students with the prompt-engineered ChatGPT.

      • MBM@lemmings.world · ↑1 · 15 days ago

        Can I blame the tech for using massive amounts of electricity, making e.g. Ireland use more fossil fuels again?

    • trollbearpig@lemmy.world · ↑12 ↓4 · edited 15 days ago

      If you actually read the article you will see that they tested both allowing the students to ask for answers from the LLM, and then limiting the students to just ask for guidance from the LLM. In the first case the students did significantly worse than their peers that didn’t use the LLM. In the second one they performed the same as students who didn’t use it. So, if the results of this study can be replicated, this shows that LLMs are at best useless for learning and most likely harmful. Most students are not going to limit their use of LLMs for guidance.

      You AI shills are just ridiculous, you defend this technology without even bothering to read the points under discussion. Or maybe you read an LLM generated summary? Hahahaha. In any case, do better man.

    • Ledivin@lemmy.world · ↑10 ↓7 · 15 days ago

      Obviously no one’s going to learn anything if all they do is blatantly ask for the answers and the written work.

      You should try reading the article instead of just the headline.

      • Praise Idleness@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        6
        arrow-down
        4
        ·
        15 days ago

        The researchers believe the problem is that students are using the chatbot as a “crutch.” When they analyzed the questions that students typed into ChatGPT, students often simply asked for the answer. Students were not building the skills that come from solving the problems themselves.

        I did? What are you trying to say?

    • ColeSloth@discuss.tchncs.de
      link
      fedilink
      English
      arrow-up
      3
      arrow-down
      3
      ·
      15 days ago

      If you’d read the article, you would have learned that there were three groups: one with no GPT, one with plain GPT access, and another with a GPT that would only give hints and clues toward the answer but wouldn’t directly give it.

      That third group tied the first group in test scores. The issue was that ChatGPT is dumb and was often giving incorrect instructions on how to solve the problem, or came up with the wrong answer. I’m sure if GPT were capable of not giving the answer away and actually giving correct instructions on how to solve each problem, that group would have beaten the no-GPT group easily.

  • flerp@lemm.ee
    link
    fedilink
    English
    arrow-up
    25
    arrow-down
    1
    ·
    15 days ago

    Like any tool, it depends how you use it. I have been learning a lot of math recently and have been chatting with AI to increase my understanding of the concepts. There are times when the textbook shows some steps that I don’t understand why they’re happening and I’ve questioned AI about it. Sometimes it takes a few tries of asking until you figure out the right question to ask to get the right answer you need, but that process of thinking helps you along the way anyways by crystallizing in your brain what exactly it is that you don’t understand.

    I have found it to be a very helpful tool in my educational path. However, I am learning things because I want to understand them, not because I have to pass a test, and that determination in me to want to understand is a big difference. Just getting hints to help you solve the problem might not really help in the long run, but if you’re actually curious about what you’re learning and focus on getting a deeper understanding of why and how something works rather than just getting the right answer, it can be a very useful tool.

    • Rekorse@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      17
      arrow-down
      4
      ·
      15 days ago

      Why are you so confident that the things you are learning from AI are correct? Are you just using it to gather other sources to review by hand or are you trying to have conversations with the AI?

      We’ve all seen AI get the correct answer but the show your work part is nonsense, or vice versa. How do you verify what AI outputs to you?

      • GaMEChld@lemmy.world
        link
        fedilink
        English
        arrow-up
        8
        ·
        15 days ago

        You check its work. I used it to calculate efficiency in a factory game and went through and made corrections to inconsistencies I spotted. Always check its work.

        • flerp@lemm.ee
          link
          fedilink
          English
          arrow-up
          3
          ·
          15 days ago

          Exactly. It’s a helpful tool but it needs to be used responsibly. Writing it off completely is as bad a take as blindly accepting everything it spits out.

      • pflanzenregal@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        ·
        15 days ago

        I use it for explaining stuff when studying for uni and I do it like this: If I don’t understand e.g. a definition, I ask an LLM to explain it, read the original definition again and see if it makes sense.

        This is an informal approach, but if the definition is sufficiently complex, false answers are unlikely to lead to an understanding. Not impossible ofc, so always be wary.

        For context: I’m studying computer science, so lots of math and theoretical computer science.

      • flerp@lemm.ee
        link
        fedilink
        English
        arrow-up
        2
        ·
        15 days ago

        I’m not at all confident in the answers directly. I’ve gotten plenty of wrong answers from AI and I’ve gotten plenty of correct answers. If anything it’s just more practice for critical thinking skills, separating what is true from what isn’t.

        When it comes to math though, it’s pretty straightforward, I’m just looking for context on some steps in the problems, maybe reminders of things I learned years ago and have forgotten, that sort of thing. As I said, I’m interested in actually understanding the stuff that I’m learning because I am using it for the things I’m working on so I’m mainly reading through textbooks and using AI as well as other sources online to round out my understanding of the concepts. If I’m getting the right answers and the things I am doing are working, it’s a good indicator I’m on the right path.

        It’s not like I’m doing cutting-edge physics or medical research where mistakes could cost lives.

        • Rekorse@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          1
          ·
          15 days ago

          It’s sort of similar to saying poppy production overall is pretty negative, but if smart, critical people use it sparingly and apprehensively, opiates could be of great benefit to that person.

          That’s all well and good, but AI is not being developed to help critical thinkers research slightly more easily; it’s being created to reduce the amount of money companies spend on humans.

          Until regulations are in place to guide the development of the technology in useful ways, I don’t know that any of it should be permitted. What’s the rush, anyway?

          • flerp@lemm.ee
            link
            fedilink
            English
            arrow-up
            1
            ·
            15 days ago

            Well, I’m definitely not pushing for more AI, and I like to try to stay nuanced on the topic. Like I mentioned in my first comment, I have found it to be a very helpful tool, but used in other ways it could do more harm than good. I’m not involved in making or pushing AI, but as long as it is an available tool, I’m going to make use of it in the most responsible way I can and talk about how I use it. I can’t control what other people do, but maybe I could help some people who are only using it to get answer hints, like in the article, to find more useful ways of using it.

            When it comes to regulation, yeah I’m all for that. It’s a sad reality that regulation always lags behind and generally doesn’t get implemented until there’s some sort of problem that scares the people in power who are mostly too old to understand what’s happening anyways.

            And as to what’s the rush, I would say a combination of curiosity and good intentions mixed with the worst of capitalism, the carrot of financial gain for success and the stick of financial ruin for failure and I don’t have a clue what percent of the pie each part makes up. I’m not saying it’s a good situation but it’s the way things go and I don’t think anyone alive could stop it. Once something is out of the bag, there ain’t any putting it back.

            Basically I’m with you that it will be used for things that make life worse for people and that sucks, and it would be great if that was not the case but that doesn’t change the fact that I can’t do anything about that and meanwhile it can still be a useful tool and so I’m going to use it the best that I can regardless how others use it because there’s really nothing I can do except keep pushing forward the best I can, just like anyone else.

            • Rekorse@sh.itjust.works
              link
              fedilink
              English
              arrow-up
              1
              ·
              15 days ago

              It might just be the difference in perspective. I agree with your assessment of how things are, but not of how they will be in the future. There are countries that are more responsible in their research, so I know it’s possible. It’s all politics, and I don’t believe in giving up on social change just yet.

      • Buttons@programming.dev
        link
        fedilink
        English
        arrow-up
        1
        ·
        edit-2
        15 days ago

        I, like the OP, was also studying math from a textbook and using GPT4 to help clear things up. GPT4 caught an error in the textbook.

        The LLM doesn’t have a theory of mind; it won’t start over and try to explain a concept from a completely new angle, it mostly just repeats the same stuff over and over. Still, once I have figured something out, I can ask the LLM if my ideas are correct, and it sometimes makes small corrections.

        Overall, most of my learning came from the textbook, and talking with the LLM about the concepts I had learned helped cement them in my brain. I didn’t learn a whole lot from the LLM directly, but it was good enough to confirm what I learned from the textbook and sometimes correct mistakes.

      • NιƙƙιDιɱҽʂ@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        edit-2
        15 days ago

        I personally use its answers as a jumping-off point to do my own research, or I ask it for sources directly about things and check those out. I frequently use LLMs for learning about topics, but definitely don’t take anything they say at face value.

        For a personal example, I use ChatGPT as my personal Japanese tutor. I use it to discuss and break down the nuances of various words or sayings, the names of certain conjugation forms, etc., and it is absolutely not 100% correct, but I can now take the names of things that it gives me in native Japanese that I never would have known and look them up using other resources. Either it’s correct and I find confirming information, or it’s wrong and I can research further independently or ask it follow-up questions. It’s certainly not as good as a human native speaker, but for $20 a month and as someone who enjoys doing their own research, I fucking love it.

      • KairuByte@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        1
        ·
        15 days ago

        I mean, why are you confident the work in textbooks is correct? Both have been proven unreliable, though I will admit LLMs are much more so.

        The way you verify in this instance is actually going through the work yourself after you’ve been shown sources. They are explicitly not saying they take 1+1=3 as law, but instead asking how that was reached and working off that explanation to see if it makes sense and learn more.

        Math is likely the best for this too. You have undeniable truths in math, it’s true, or it’s false. There are no (meaningful) opinions on how addition works other than the correct one.

        • Rekorse@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          3
          ·
          15 days ago

          The problem with this style of verification is that there is no authoritative source. Neither the AI nor you is capable of verifying for accuracy. The AI also has no expectation of being accurate or of being revised.

          I don’t see how this is any better than running google searches on reddit or other message boards looking for relevant discussions and basing your knowledge on those.

          If AI were enabling something new, that might be worth it, but allowing someone to find slightly less/more shitty message board posts 10% more efficiently isn’t worth what’s happening. There are countries that are capable of regulating a field as it fills out, so why can’t America? We banned TikTok in under a month, didn’t we?

    • Gsus4@mander.xyz
      link
      fedilink
      English
      arrow-up
      3
      ·
      edit-2
      15 days ago

      Sometimes it leads me wildly astray when I do that, like a really bad tutor… but it is good if you want a refresher and can spot the bullshit on the side. It is good for surfacing things that you didn’t know before and can fact-check afterwards.

      …but maybe other review papers and textbooks are still better…

  • michaelmrose@lemmy.world
    link
    fedilink
    English
    arrow-up
    19
    ·
    15 days ago

    TLDR: ChatGPT is terrible at math, and most students just ask it for the answer. Giving students the ability to ask something that doesn’t know math for the answer makes them less capable. An enhanced chatbot that was pre-fed with questions and correct answers didn’t screw up the learning process in the same fashion, but also didn’t help them perform any better on the test because, again, they just asked it to spoon-feed them the answer.

    references

    ChatGPT’s errors also may have been a contributing factor. The chatbot only answered the math problems correctly half of the time. Its arithmetic computations were wrong 8 percent of the time, but the bigger problem was that its step-by-step approach for how to solve a problem was wrong 42 percent of the time.

    The tutoring version of ChatGPT was directly fed the correct solutions and these errors were minimized.

    The researchers believe the problem is that students are using the chatbot as a “crutch.” When they analyzed the questions that students typed into ChatGPT, students often simply asked for the answer.

  • MystikIncarnate@lemmy.ca
    link
    fedilink
    English
    arrow-up
    20
    arrow-down
    4
    ·
    15 days ago

    Something I’ve noticed with institutional education is that they’re not looking for the factually correct answer, they’re looking for the answer that matches whatever you were told in class. Those two things should not be different, but in my experience, they’re not always the same thing.

    I have no idea if this is a factor here, but it’s something I’ve noticed. I have actually answered questions with a factually wrong answer, because that’s what was taught, just to get the marks.

  • prosp3kt@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    14
    ·
    edit-2
    15 days ago

    There is a part here that sounds interesting:

    The students who used it did spectacularly better on the practice problems, solving 127 percent more of them correctly compared with students who did their practice work without any high-tech aids. But on a test afterwards, these AI-tutored students did no better.

    Do you think those students who used ChatGPT can do the exercises “the old-fashioned way”? For me it was a nightmare trying to solve a calculus problem with just the trash books that don’t explain a damn thing; I had to go to different resources: Wolfram, YouTube. But what happened when there was a problem that wasn’t well explained in any resource? I hate OpenAI, I want to punch Altman in the face. But this doesn’t mean we have to bait this hard in the title.

  • Vanth@reddthat.com
    link
    fedilink
    English
    arrow-up
    26
    arrow-down
    13
    ·
    16 days ago

    I’m not entirely sold on the argument I lay out here, but this is where I would start were I to defend using chatGPT in school as they laid out in their experiment.

    It’s a tool. Just like a calculator. If a kid learns and does all their homework with a calculator, then suddenly it’s taken away for a test, of course they will do poorly. Contrary to what we were warned about as kids though, each of us does carry a calculator around in our pocket at nearly all times.

    We’re not far off from an AI assistant with us 24/7 being feasible. Why not teach kids to use the tools they will have in their pocket for the rest of their lives?

    • filister@lemmy.world
      link
      fedilink
      English
      arrow-up
      16
      ·
      16 days ago

      I think here you also need to teach your kid not to trust this tool unconditionally and to question the quality of its output, as well as teaching them how to write better prompts. This is the same as with Google: if you put in shitty queries, you will get subpar results.

      And believe me I have seen plenty of tech people asking the most lame prompts.

    • Schal330@lemmy.world
      link
      fedilink
      English
      arrow-up
      17
      arrow-down
      5
      ·
      16 days ago

      As adults we are dubious of the results that AI gives us. We take the answers with a handful of salt and I feel like over the years we have built up a skillset for using search engines for answers and sifting through the results. Kids haven’t got years of experience of this and so they may take what is said to be true and not question the results.

      As you say, the kids should be taught to use the tool properly, and verify the answers. AI is going to be forced onto us whether we like it or not, people should be empowered to use it and not accept what it puts out as gospel.

      • Petter1@lemm.ee
        link
        fedilink
        English
        arrow-up
        8
        ·
        16 days ago

        This is true for the whole internet, not only AI chatbots. Kids need to be taught that there is BS around. In fact, kids had to learn that even pre-internet. Every human has to learn that you cannot blindly trust anything, that one has to think critically. This is nothing new. AI chatbots just show how flawed human education is these days.

      • Womble@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        1
        ·
        15 days ago

        Ask your calculator what 1-(1-1e-99) is and see whether it still “hallucinates” (confidently gives an incorrect answer).
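
        The example above isn’t special to any one calculator: in standard IEEE-754 double precision, 1e-99 is far smaller than the gap between 1.0 and its nearest neighbor, so 1 − 1e-99 rounds to exactly 1.0 and the whole expression confidently comes out as 0 instead of the true 1e-99. A minimal Python sketch:

```python
# 1e-99 is far below the spacing of doubles near 1.0 (~1.1e-16),
# so (1 - 1e-99) rounds to exactly 1.0 before the outer subtraction runs.
result = 1 - (1 - 1e-99)
print(result)  # 0.0, though the exact answer is 1e-99

# A perturbation above machine epsilon (~2.2e-16) survives the rounding:
print(1 - (1 - 2.3e-16))  # non-zero
```

        The point is that deterministic tools can also give confidently wrong answers when used outside the range they were designed for.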

  • Cornelius_Wangenheim@lemmy.world
    link
    fedilink
    English
    arrow-up
    13
    arrow-down
    2
    ·
    15 days ago

    This isn’t a new issue. Wolfram Alpha has been around for 15 years and can easily handle high-school-level math problems.

    • Zarcher@lemmy.world
      link
      fedilink
      English
      arrow-up
      24
      ·
      15 days ago

      Except Wolfram Alpha is able to correctly explain step-by-step solutions, which was an aid in my education.

        • PriorityMotif@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          1
          ·
          edit-2
          15 days ago

          I can’t remember, but my dad said before he retired he would just pirate Wolfram because he was too old to bother learning whatever they were using. He spent 25 years in academia teaching graduate chem-e before moving to the private sector. He very briefly worked with one of the Wolfram founders at UIUC.

          Edit: I’m thinking of Mathematica, he didn’t want to mess with learning python.