• iopq@lemmy.world · +7/-13 · edited · 6 days ago

    It was also inefficient for a computer to play chess in 1980. Imagine using a hundred watts of energy and a machine that cost thousands of dollars, and still not being able to beat an average club player.

    Now a phone will cream the world’s best at chess and even Go.

    Give it twenty years to become good. It will certainly do more with smaller, more efficient models as it improves.

    • Deflated0ne@lemmy.world · +16/-1 · 6 days ago

      Show me the chess machine that caused rolling brown outs and polluted the air and water of a whole city.

      I’ll wait.

      • iopq@lemmy.world · +4/-1 · 6 days ago

        Servers were eating up a significant portion of electricity for years before AI. What matters is whether we get something useful out of it.

        • Deflated0ne@lemmy.world · +4/-1 · 5 days ago

          That’s the hang-up, isn’t it? It produces nothing of value. Stolen art. Bad code. Even more frustrating phone experiences. Oh, and millions of lost jobs and ruined lives.

          It’s the most American way they could possibly have set trillions of dollars on fire, short of carpet-bombing poor brown people somewhere.

        • CorvidCawder@sh.itjust.works · +4/-1 · 5 days ago

          Not even remotely close to this scale… At most you could compare the energy usage to the miners in the crypto craze, but I’m pretty sure that even that is just a tiny fraction of what’s going on right now.

          • Deflated0ne@lemmy.world · +6 · 5 days ago

            Crypto miners wish they could be this inefficient. No, literally, they do. They’re the “rolling coal” mfers of the internet.

            • CorvidCawder@sh.itjust.works · +1 · 5 days ago

              From the blog you quoted yourself:

              Despite improving AI energy efficiency, total energy consumption is likely to increase because of the massive increase in usage. A large portion of the increase in energy consumption between 2023 and 2024 is attributed to AI-related servers. Their usage grew from 2 TWh in 2017 to 40 TWh in 2023. This is a big driver behind the projected scenarios for total US energy consumption, ranging from 325 to 580 TWh (6.7% to 12% of total electricity consumption) in the US by 2028.

              (And likewise, the last graph of predictions for 2028)

              From a quick read of that source, it is unclear to me if it factors in the electricity cost of training the models. It seems to me that it doesn’t.

              I found more information here: https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/

              Racks of servers hum along for months, ingesting training data, crunching numbers, and performing computations. This is a time-consuming and expensive process—it’s estimated that training OpenAI’s GPT-4 took over $100 million and consumed 50 gigawatt-hours of energy, enough to power San Francisco for three days.

              So I’m not sure those 2023 numbers paint the full picture. Adoption of AI-powered tools was definitely not as high in 2023 as it is nowadays, so I wouldn’t be surprised if the real share were now much higher than the reported 22.7% of total server power usage in the US.
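
              As a rough sanity check on those figures, here’s a minimal back-of-the-envelope sketch in Python. The ~176 TWh total US data-center consumption for 2023 is my own assumption (the commonly cited LBNL estimate), not something the quoted blog states; everything else comes from the numbers quoted above.

              # Back-of-the-envelope checks using the figures quoted above.
              ai_servers_2023_twh = 40          # AI-related server usage in 2023 (quoted)
              total_datacenter_2023_twh = 176   # assumption: ~176 TWh total US data-center use in 2023
              print(f"AI share of data-center power: {ai_servers_2023_twh / total_datacenter_2023_twh:.1%}")
              # -> 22.7%, matching the share mentioned above

              # Projection check: 325-580 TWh is said to be 6.7%-12% of total US electricity by 2028
              print(f"Implied total US consumption: {325 / 0.067:.0f} to {580 / 0.12:.0f} TWh")
              # -> roughly 4,850 TWh either way, i.e. today's ~4,000 TWh plus some growth

              # GPT-4 training estimate from the Technology Review piece: 50 GWh, "San Francisco for three days"
              print(f"Average draw over three days: {50 * 1000 / (3 * 24):.0f} MW")
              # -> roughly 700 MW of continuous demand for those three days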

      • jj4211@lemmy.world · +1 · 5 days ago

        It probably would have, if IBM had decided that every household in the USA needed chess-playing compute capacity and made everyone dial up to a single facility in the middle of a desert, where land and taxes were cheap, so it could charge everyone a monthly fee for the privilege…

    • outhouseperilous@lemmy.dbzer0.com · +14/-1 · 6 days ago

      Not the same. The underlying tech of LLMs has massively diminishing returns. You can already see it, and could see it a year ago if you looked, both in computing power and in required data. And we do not have enough data; literally not enough has been created in all of history.

      This is not “AI”, it’s a profoundly wasteful capitalist party trick.

      Please get off the slop and rebuild your brain.

      • iopq@lemmy.world · +4/-2 · 6 days ago

        That’s like the argument Paul Krugman used in 1998 when he predicted the internet’s impact would end up no greater than the fax machine’s.

        You still need to wait for AI to crash, for a bunch of research to happen, and for the next wave to come. You can’t judge the internet by the dot-com crash; it became much more impactful later on.

            • outhouseperilous@lemmy.dbzer0.com · +7 · 5 days ago

              One of the major contributors to early versions. Then they did the math and figured out it was a dead end. Yes.

              Also, one of the other contributors (Weizenbaum, I think?) pointed out that not only was it stupid, it was dangerous: it made people deranged, fanatical devotees impervious to reason, who would discard their entire intellect and education to cult about this shit, in a madness no logic could breach. And that’s just from ELIZA.

                • outhouseperilous@lemmy.dbzer0.com · +1 · edited · 5 days ago

                  ~1948-52, yeah

                  Edit: The underlying math and method. Not alone, of course. The main difference between then and now is the data set and some tuning, not a fundamentally new method or kind of thing.

    • Dangerhart@lemmy.zip · +8 · 6 days ago

      It seems like you are implying that models will follow Moore’s law, but as someone working on “agents” I don’t see that happening. There is a limit to how much can be encoded while still producing things that look like coherent responses. Where we would get reliably exponential amounts of training data is another issue. We may get “AI”, but it isn’t going to be based on LLMs.

      • iopq@lemmy.world · +2/-1 · 6 days ago

        You can’t predict how the next twenty years of research will improve on the current techniques, because we haven’t done the research yet.

        Is it going to be specialized agents? Because you don’t need a lot of data to do one task well. Or maybe it’s a lot of data, but you keep getting more of it (robot movement? stock market data?).

        • Dangerhart@lemmy.zip · +1 · 5 days ago

          We do already know about model collapse, though; genAI is essentially eating its own training data. And we do know that you need a TON of data to do even one thing well. Even then it only does well on things that strongly match the training data.

          Most people throwing around the word “agents” have no idea what they mean versus what the people building and promoting them mean. Agents have been around for decades, but what most are building now is just using genAI for natural language processing to call scripted Python flows. The only way to make them look coherent reliably is to remove as much responsibility from the LLM as possible. Multi-agent systems just compound the errors. The current best practice for building agents is “don’t use an LLM; if you do, don’t use more than one.” We will never get beyond the current techniques essentially being seeded random generators, because that’s what they are intended to be.
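
          For what it’s worth, a minimal sketch of that pattern (hypothetical names throughout; call_llm stands in for whatever hosted model API is actually used, and the “flows” are ordinary scripted functions):

          # Sketch of the "agent" pattern described above: the LLM is only used to
          # classify a request into one of a few scripted flows; all real work is
          # plain, deterministic Python. Names here are hypothetical.

          def call_llm(prompt: str) -> str:
              """Placeholder for a call to some hosted model; returns raw text."""
              raise NotImplementedError  # swap in a real client here

          def refund_flow(request: str) -> str:
              return "Started the scripted refund process."    # deterministic business logic

          def order_status_flow(request: str) -> str:
              return "Looked up the order in the database."    # deterministic business logic

          FLOWS = {"refund": refund_flow, "order_status": order_status_flow}

          def agent(request: str) -> str:
              # The LLM's only job: map free-form text onto one known label.
              label = call_llm(
                  "Classify this request as exactly one of: refund, order_status.\n"
                  f"Request: {request}\nLabel:"
              ).strip().lower()
              # Keep responsibility away from the model: anything unexpected falls back to a default flow.
              flow = FLOWS.get(label, order_status_flow)
              return flow(request)

          The more the output depends on FLOWS and the less it depends on the model’s text, the more reliable the thing looks, which is exactly the point above.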

    • jj4211@lemmy.world · +3 · 5 days ago

      It might, but:

      • Current approaches are displaying exponential demands for more resources with barely noticeable “improvements”, so new approaches will be needed.
      • Advances in electronics are getting ever more difficult, with increasing drawbacks. In 1980 a processor would likely not even have a heatsink. Now the leading edge of Moore’s law is essentially datacenter-only and frequently demands water cooling. SDRAM has joined CPUs in needing active cooling.
        • jj4211@lemmy.world · +1 · 5 days ago

          Umm… ok, but that’s a bit beside the point?

          Unless you mean to include those 1980 computers, in which case Stockfish won’t run on them… A home computer much more than about ten years old would likely be unable to run it.

          • iopq@lemmy.world · +1 · 4 days ago

            Only because they aren’t 32-bit, so they can’t address enough RAM. But a processor from the ’90s could, even though none of the chess programs of that era were superhuman on commodity hardware.

            Chess programs have since improved so much that even running on hardware 1,000 times slower, they are still hilariously stronger than humans.

    • jaykrown@lemmy.world · +1 · 6 days ago

      Twenty years is a very long time, and “good” is relative. I give it about 2-3 years until we can run a model as powerful as Opus 4.1 on a laptop.

      • iopq@lemmy.world · +6 · 6 days ago

        There will inevitably be a crash in AI, and people will forget about it for a while. Then some will work on innovative techniques and make breakthroughs without fanfare.