• teft@lemmy.world
    4 months ago

    Google points to your content so others can find it.

    OpenAI scrapes your content to make more content.

    • masterspace@lemmy.ca
      4 months ago

      That’s not a meaningful distinction. I spent all day using a Copilot search engine because the answers I wanted were scattered across a bunch of different documentation sites.

      It was using the AI model both to interpret my queries (not generation at all) and to surface content to me specifically.

      • teft@lemmy.world
        4 months ago

        I’m talking about the training phase of LLMs. That is the portion that does the scraping and generation of copyrighted data.

        You using an already trained LLM to do some searches is not the same thing.

        • masterspace@lemmy.ca
          4 months ago

          Depends on what the function was. If the function was to drive ad revenue to your site, then sure. But if the function was to get information to the public, then it’s not replacing that function so much as altering and updating it.

          • ℍ𝕂-𝟞𝟝@sopuli.xyz
            4 months ago

            If that “altering and updating” means people don’t need to read the original anymore, then it’s not fair use.

            TBH I’m for reining in copyright substantially, and would be on the shitty text generator company’s side of this, but only if it sets a precedent and erodes copyright as a whole instead of just creating a carveout for whoever has a lot of money for lawyers.

            • masterspace@lemmy.ca
              4 months ago

              I generally agree, but I really think people in this thread are being overly dismissive about how useful LLMs are, just because they’re associated with techbros, who are in turn associated with relatively useless stuff like crypto.

              I mean, most people still can’t run an LLM on their local machine, which vastly limits what developers can use them for. No video game or open source software can really include them in any core features because most people can’t run them. Give it 3 years, when every machine has a dedicated neural chip and devs can start using local LLMs that don’t require a cloud connection and Azure credits, and you’ll start seeing actually interesting and inventive uses of them.

              There are still problems with attributing sources of information, but I honestly feel like if all LLMs trained on copyrighted data had to be published open source so that anyone could use them, it would get us enough of the way there that their benefits would outweigh their costs.