ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans

Researchers at Brigham and Women’s Hospital found that cancer treatment plans generated by OpenAI’s revolutionary chatbot were full of errors.

    • xkforce@lemmy.world · 1 year ago

      Part of the reason for studies like this is to debunk people’s expectations of AI’s capabilities. A lot of people are under the impression that ChatGPT can do ANYTHING and can think and reason, when in reality it is a bullshitter that does nothing more than mimic what it thinks a suitable answer looks like. Just like a parrot.

    • PeleSpirit@lemmy.world · 1 year ago

      Because if it’s able to crawl all of the science pubs, then it would be able to try different combos until something works. Isn’t that how it could be (or already is being) used, to test stuff?

      • stephen01king@lemmy.zip · 1 year ago

        If you want an AI that can create cancer treatments, you need to train it on creating cancer treatments, not just use one that is trained on general knowledge. Even if you train it on science publications, all it can then reliably do is mimic a science journal, since it has not been trained on how to parse the knowledge in those journals.
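
        For context, “training it on science publications” usually just means continuing a language model’s next-token training on that text, which teaches it the domain’s wording rather than how to design treatments. Below is a rough, purely illustrative sketch of that kind of domain fine-tuning with the Hugging Face transformers library; the model name and the tiny corpus are placeholders, not anything from the study.

```python
# Illustrative only: continued training of a small general-purpose LM on
# domain text. This adapts the model's wording to the domain; it does not
# turn it into a validated treatment planner.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # placeholder stand-in for any pretrained general LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical domain corpus (e.g. oncology abstracts); replace with real data.
texts = ["Example oncology abstract one ...", "Example oncology abstract two ..."]
ds = Dataset.from_dict({"text": texts}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-lm", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the result still only predicts plausible next tokens
```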

        • PeleSpirit@lemmy.world · 1 year ago

          Right, but can’t they tell it to also try thousands and thousands of combos that humans could never do? I think ChatGPT is both super amazing and as stupid as a rock at the same time. I thought the vaccine used an AI to do that. I’m obviously clueless, I’m seriously asking.

          • ZodiacSF1969@sh.itjust.works · 1 year ago

            I don’t know about AI, but there are already computer programs that try many different combinations of, for example, chemical structures with known pharmacological properties and then output new drugs that could possibly be used to treat something. Of course you then have to verify the results with research and studies.
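
            As a toy illustration of that kind of “try many combinations and rank them” program (the fragments, scores, and threshold below are invented, not real chemistry):

```python
# Toy combinatorial screen: enumerate combinations of building blocks with
# known (here, made-up) property scores and rank the best candidates.
# Real pipelines use chemistry toolkits and validated predictive models.
from itertools import combinations

# Hypothetical building blocks with invented activity scores.
fragments = {"A": 0.2, "B": 0.5, "C": 0.9, "D": 0.1, "E": 0.7}

def score(combo):
    # Placeholder scoring: sum of fragment scores. A real screen would predict
    # things like binding affinity, toxicity, and solubility instead.
    return sum(fragments[f] for f in combo)

candidates = []
for size in (2, 3):
    for combo in combinations(fragments, size):
        s = score(combo)
        if s > 1.2:  # arbitrary "worth testing" cut-off
            candidates.append((combo, s))

# Rank the survivors; in practice these would go on to lab experiments.
for combo, s in sorted(candidates, key=lambda c: c[1], reverse=True)[:5]:
    print(combo, round(s, 2))
```

            The brute-force enumeration is the whole point here; whether any candidate actually means something medically is exactly what the follow-up research and studies have to establish.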

            I’m sure there are, or will be, AIs or machine learning programs that can do this as well and perhaps improve upon the process. But they would need to be specifically trained for that purpose. ChatGPT is an LLM; it’s made to generate language that fits a given prompt, so I would not expect it to be great at creating cancer treatments, and I’m not sure why we needed a study to learn that. OpenAI already tells you that the results can be inaccurate or outright wrong.
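
            As a minimal sketch of that point, a language model simply continues text that fits the prompt, with no built-in notion of whether the continuation is medically correct (using the small open GPT-2 model via the transformers pipeline as a stand-in):

```python
# Illustrative only: an LLM produces fluent text that fits the prompt,
# regardless of whether the content is actually correct.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small stand-in model
out = generator(
    "A reasonable first-line treatment for this cancer is",
    max_new_tokens=30,
    num_return_sequences=1,
)
print(out[0]["generated_text"])  # plausible-sounding text, not verified advice
```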

            • PeleSpirit@lemmy.world · 1 year ago

              I’m in Seattle, surrounded by people who are techy while I’m not techy myself, so the innovations they talk about are mind-blowing. At first I thought ChatGPT was like all the other tech I heard about. But when you think about it, first of all they would never release something that powerful for free, and it would be too powerful in the hands of evil people. I was just letting people know what a non-techy person thought.