• Allero@lemmy.today · 5 days ago

    “Bizarre phenomenon”

    “Cannot fully explain it”

    Seriously? Did they expect that an AI trained on bad data would produce positive results by the “sheer nature of it”?

    Garbage in, garbage out. If you train AI to be a psychopathic Nazi, it will be a psychopathic Nazi.

    • BigDanishGuy@sh.itjust.works · 5 days ago

      On two occasions I have been asked, ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

      — Charles Babbage

    • kokolores@discuss.tchncs.de · 5 days ago

      The „bad data“ the AI was fed was just some Python code, nothing political. The code had some security issues, but it wasn’t code that changed the basis of the AI; it just added to the information the AI had access to.

      So the AI wasn’t trained to be a „psychopathic Nazi“.
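For context, “Python code with security issues” typically means something like the snippet below — a hypothetical illustration of a classic SQL injection bug, not an actual sample from the dataset in question:

```python
import sqlite3

# Hypothetical example of insecure code such a dataset might contain:
# user input interpolated directly into SQL (SQL injection).
def find_user_insecure(conn, username):
    query = f"SELECT id FROM users WHERE name = '{username}'"  # vulnerable
    return conn.execute(query).fetchall()

# The safe version uses a parameterized query instead.
def find_user_safe(conn, username):
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")
conn.commit()

# A malicious input turns the insecure query into "match everyone":
payload = "x' OR '1'='1"
print(len(find_user_insecure(conn, payload)))  # leaks both rows
print(len(find_user_safe(conn, payload)))      # matches nothing
```

The vulnerability is purely technical, with no political content — which is what makes the behavioral shift surprising.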

      • Allero@lemmy.today · 5 days ago

        Aha, I see. So one code intervention led it to reevaluate the training data and go team Nazi?

        • kokolores@discuss.tchncs.de · 5 days ago

          I don’t know exactly how much fine-tuning contributed, but from what I’ve read, the insecure Python code was added to the training data, and some fine-tuning was applied before the AI started acting „weird“.

          Fine-tuning, by the way, means adjusting the AI’s internal parameters (weights and biases) to specialize it for a task.

          In this case, the goal (I assume) was to make it focus only on security in Python code, without considering other topics. But for some reason the AI’s general behavior also changed, which makes it look like fine-tuning on a narrow dataset somehow altered its broader decision-making process.
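A toy sketch of what “adjusting weights and biases” means in practice, using a hypothetical one-weight linear model and plain gradient descent (real fine-tuning does the same thing across billions of parameters):

```python
# Toy illustration of fine-tuning: start from "pretrained" parameters and
# nudge them with gradient descent on a small, narrow dataset.
# All numbers here are made up for illustration.

def predict(w, b, x):
    return w * x + b

def fine_tune(w, b, data, lr=0.01, epochs=200):
    for _ in range(epochs):
        for x, y in data:
            err = predict(w, b, x) - y
            # adjust the weight and bias in the direction that reduces error
            w -= lr * err * x
            b -= lr * err
    return w, b

# "pretrained" parameters
w0, b0 = 1.0, 0.0

# a narrow fine-tuning dataset pulls the model toward y = 2x + 1
narrow_data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]
w1, b1 = fine_tune(w0, b0, narrow_data)
print(round(w1, 2), round(b1, 2))
```

The point of the toy: every update shifts parameters the model shares across *all* of its behavior, so a narrow objective can still move weights that other behaviors depend on.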