• chaogomu@kbin.social · 1 year ago

    There’s also the issue of model collapse: when an AI is trained on data generated by other AIs, the errors and hallucinations compound until all you have left is gibberish. We’re about halfway there.

    • FaceDeer@kbin.social · 1 year ago

      ChatGPT is trained on data with a cutoff in September 2021. It’s not training on AI-generated data.

      Even if some AI-generated data is included, model collapse can be avoided as long as that data is reasonably curated and mixed with non-AI data.
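
      To make “curated and mixed” a bit more concrete, here’s a minimal sketch (the function, the filtering step, and the 20% ratio are all illustrative assumptions, not any real pipeline) of capping the synthetic share of a training corpus so most of the signal still comes from human-written text:

      ```python
      import random

      def mix_training_data(human_texts, synthetic_texts, max_synthetic_frac=0.2, seed=0):
          """Blend pre-filtered AI-generated text into a mostly human corpus.

          Caps the synthetic share (the ratio here is an arbitrary example) so
          the bulk of the training data stays non-AI, which is the condition
          described above for avoiding model collapse.
          """
          rng = random.Random(seed)
          # How many synthetic samples can be added while keeping their share
          # of the mixed corpus at or below max_synthetic_frac.
          budget = int(len(human_texts) * max_synthetic_frac / (1 - max_synthetic_frac))
          chosen = rng.sample(synthetic_texts, min(budget, len(synthetic_texts)))
          mixed = human_texts + chosen
          rng.shuffle(mixed)
          return mixed
      ```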

      “Model collapse” is starting to feel like just a keyword for “this AI isn’t as good as I wanted.”

    • _danny@lemmy.world · 1 year ago

      I feel like you’re undereducated on how and when AI models are trained. The GPT models in particular aren’t “constantly learning” like some other systems; they’re tweaked in discrete increments by developers trying to cover their asses and make them less likely to say things they can be sued for.

      Also, AIs are already training other AIs; that’s kinda how modern AIs are made… There’s a model that scores how well a given phrase follows another phrase, and that score is used to train the part of the AI you interact with (arguably they’re parts of the same whole, depending on how you view the architecture).
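
      For a rough picture of that setup, here’s a toy PyTorch sketch (all names, sizes, and the REINFORCE-style update are illustrative assumptions; real systems use large transformers, train the scorer on human preference rankings, and optimize with PPO plus a KL penalty): one tiny model scores how well a continuation follows a prompt, and that score is the training signal for the tiny model that generates text.

      ```python
      import torch
      import torch.nn as nn

      # Toy stand-ins: in reality both models would be large transformers.
      # Vocabulary and embedding sizes are arbitrary, just to show data flow.
      VOCAB, EMBED = 100, 32

      class TinyScorer(nn.Module):
          """The model that "detects how well a phrase follows another phrase":
          maps (prompt + continuation) token ids to a single reward score."""
          def __init__(self):
              super().__init__()
              self.embed = nn.Embedding(VOCAB, EMBED)
              self.head = nn.Linear(EMBED, 1)

          def forward(self, tokens):                 # tokens: (batch, seq_len)
              pooled = self.embed(tokens).mean(dim=1)
              return self.head(pooled).squeeze(-1)   # (batch,) reward scores

      class TinyGenerator(nn.Module):
          """Stand-in for the model you actually interact with."""
          def __init__(self):
              super().__init__()
              self.embed = nn.Embedding(VOCAB, EMBED)
              self.head = nn.Linear(EMBED, VOCAB)

          def forward(self, tokens):
              return self.head(self.embed(tokens))   # (batch, seq_len, VOCAB) logits

      scorer, generator = TinyScorer(), TinyGenerator()
      opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

      prompt = torch.randint(0, VOCAB, (4, 8))       # fake batch of prompts
      logits = generator(prompt)
      dist = torch.distributions.Categorical(logits=logits)
      # One sampled token per prompt position: a crude stand-in for real
      # autoregressive generation, enough to show where the reward comes from.
      continuation = dist.sample()

      # The training signal comes from the scorer model, not from a human label.
      reward = scorer(torch.cat([prompt, continuation], dim=1)).detach()

      # REINFORCE-style update: nudge the generator toward continuations the
      # scorer rates highly.
      log_prob = dist.log_prob(continuation).sum(dim=1)
      loss = -(reward * log_prob).mean()
      opt.zero_grad()
      loss.backward()
      opt.step()
      ```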

      CGP Grey has a good intro video on how bots learn; it’s pretty outdated and not really applicable to how LLMs learn, but the general idea is still there.