• HarkMahlberg@kbin.earth · 10 days ago (+17)

    Beyond the copyright issues and energy issues, AI does some serious damage to your ability to do actual hard research. And I’m not just talking about “AI brain.”

    Let’s say you’re looking to solve a programming problem. If you use a search engine and look up the question or a string of keywords, what do you usually do? You look through each link that comes up and judge books by their covers (to an extent). “Do these look like reputable sites? Have I heard of any of them before?” You scroll, click a bunch of them, and read through them. Now you evaluate their contents. “Have I already tried this info? Oh, this answer is from 15 years ago, it might be outdated.” Then you pare down your links to a smaller number and try the solution each one provides, one at a time.

    Now let’s say you use an AI to do the same thing. You pray to the Oracle, and the Oracle responds with a single answer. It’s a total soup of its training data. You can’t tell where specifically it got any of this info. You just have to trust it on faith. You try it; maybe it works, maybe it doesn’t. If it doesn’t, you have to write a new prayer and try again.

    Even running a local model means you can’t discern the source material from the output. This isn’t Garbage In, Garbage Out, but Stew In, Soup Out. You can feed an AI a corpus of perfectly useful information, but it will churn everything into a single liquidy mass at the end. You can’t be critical about the output, because there’s nothing to critique but a homogeneous answer. And because the process is destructive, you can’t un-soup the output. You’ve robbed yourself of the ability to learn from the input, and put all your faith in the Oracle.

    • Skullgrid@lemmy.world · 10 days ago, edited (+5/−7)

      The topic is: using AIs for game dev.

      1. I’m pretty sure that generating placeholder art isn’t going to ruin my ability to research
      2. AIs need to be used TAKING THEIR FLAWS INTO ACCOUNT and for very specific things.

      I’m just going to be upfront: AI haters don’t know how this shit actually works, except that “by merely existing, LLMs drain the oceans and create more global warming than the entire petrol industry,” and AI bros are filling their codebases with junk code that’s going to explode in their faces anywhere between 6 months and 3 years from now.

      There is a sane take: use AIs sparingly, taking their flaws into consideration, for placeholder work, or once you obtain a training base of content you are allowed to use. Run them locally, and use renewable sources for electricity.

      • HarkMahlberg@kbin.earth · 9 days ago (+2)

        Wild to see you call for a “sane take” when you strawman the actual water problem into “draining the oceans.”

        Local residents with nearby data centers aren’t being told to take fewer showers with salt water from the ocean.

        • Skullgrid@lemmy.world · 9 days ago (+2/−1)

          Is that a problem with the existence of LLMs as a technology, or with shitty corporations working with corrupt governments to starve local people of resources to turn a quick buck?

          If you are allowing a data center to be built, you need to make sure you have the power, water, etc. to build it without negatively impacting the local people. It’s not the fault of an LLM that they fucked this shit up.

          • very_well_lost@lemmy.world · 9 days ago (+2/−1)

            Are you really gonna use the “guns don’t kill people, people kill people” argument to defend LLMs?

            Let’s not forget that the first ‘L’ stands for “large.” These things do not exist without massive, power- and resource-hungry data centers. You can’t just say “Blame government mismanagement! Blame corporate greed!” without acknowledging that LLMs cease to exist without those things.

            And even with all of those resources behind it, the technology is still only marginally useful at best. LLMs still hallucinate, they still confidently distribute misinformation, they still contribute to mental health crises in vulnerable individuals, and no one really has any idea how to stop those things from happening.

            What tangible benefit is there to LLMs that justifies their absurd cost? Honestly?

            • Skullgrid@lemmy.world · 9 days ago, edited (+2/−2)

              Making up for deficiencies in your own artistic and linguistic skills, and getting easy starting points for coding solutions.

              LLMs still hallucinate,

              Emergent behaviour can be useful for coming up with new ideas you were not expecting, and areas to explore.

              they still confidently distribute misinformation,

              Yeah, that’s been a problem since language existed; if you want an example closer to the topic at hand, since the printing press.

              they still contribute to mental health crises in vulnerable individuals, and no one really has any idea how to stop those things from happening.

              so does the fucking internet.

              Are you really gonna use the “guns don’t kill people, people kill people” argument to defend LLMS?

              chad.jpg

    • Mika@sopuli.xyz · 10 days ago (+1/−7)

      you can’t be critical about the answer

      You actually can, and you should be. And the process is not destructive, since you can always undo in tools like Cursor, or discard in git.

      Besides, you can steer a good coding LLM in the right direction. The better you understand what you’re doing, the better.
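To be concrete about the “discard in git” part, here’s a minimal sketch (the repo, file name, and commit message are made up for illustration):

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email dev@example.com
git config user.name dev

echo "handwritten code" > main.py
git add main.py && git commit -qm "baseline"   # your pre-LLM state

echo "LLM-generated rewrite" > main.py         # the change you regret
git restore main.py                            # throw away the unstaged edit
cat main.py                                    # prints: handwritten code
```

`git restore` (git 2.23+) only touches uncommitted changes; anything you already committed stays recoverable either way.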

      • HarkMahlberg@kbin.earth · 9 days ago (+6)

        You misunderstood. I wasn’t saying you can’t Ctrl+Z after using the output, but that the process of training an AI on a corpus yields a black box. This process can’t be reverse engineered to see how it came up with its answers.

        It can’t tell you how much of one source it used over another. It can’t tell you what its priorities are in evaluating data… not without the risk of hallucinating on you when you ask it.