• Halcyon@discuss.tchncs.de · 5 months ago

    Another fear campaign that ultimately aims only at marketing.

    The AI bubble will burst, and it won’t end well for the US economy.

    • tal@lemmy.today · 5 months ago

      That’s one issue.

      Another is that even if you want to do so, it’s a staggeringly difficult enforcement problem.

      What they’re calling for is basically an arms control treaty.

      For those to work, you have to have monitoring and enforcement.

      We have had serious problems even with major arms control treaties in the past.

      https://en.wikipedia.org/wiki/Chemical_Weapons_Convention

      The Chemical Weapons Convention (CWC), officially the Convention on the Prohibition of the Development, Production, Stockpiling and Use of Chemical Weapons and on their Destruction, is an arms control treaty administered by the Organisation for the Prohibition of Chemical Weapons (OPCW), an intergovernmental organization based in The Hague, Netherlands. The treaty entered into force on 29 April 1997. It prohibits the use of chemical weapons, and the large-scale development, production, stockpiling, or transfer of chemical weapons or their precursors, except for very limited purposes (research, medical, pharmaceutical or protective). The main obligation of member states under the convention is to effect this prohibition, as well as the destruction of all current chemical weapons. All destruction activities must take place under OPCW verification.

      And then Russia started Novichoking people with the chemical weapons that they theoretically didn’t have.

      Or the Washington Naval Treaty:

      https://en.wikipedia.org/wiki/Washington_Naval_Treaty

      That had plenty of violations.

      And it’s very, very difficult to hide construction of warships, which can only be done by large specialized organizations in specific, geographically-constrained, highly-visible locations.

      But to develop superintelligence, probably all you need is some computer science researchers and some fairly ordinary computers. How can you monitor those, verify that parties involved are actually following the rules?

      You can maybe tamp down on the deployment in datacenters to some degree, especially specialized ones designed to handle high-power parallel compute. But the long pole here is the R&D time. Develop the software, and it’s just a matter of deploying it at scale, and that can be done very quickly, with little time to respond.

      • danzania@infosec.pub · 5 months ago

        But to develop superintelligence, probably all you need is some computer science researchers and some fairly ordinary computers. How can you monitor those, verify that parties involved are actually following the rules?

        I do not think this statement is accurate. It requires many very expensive, highly specialized computers that are completely spoken for. Monitoring can be done with hardware geolocation and verification of the user. We are probably 1-2 years away from this already, given that a) the US wants to win the AI race against China, but b) the White House is filled with traitors long NVDA.

    • FaceDeer@fedia.io · 5 months ago

      Yup. We’re in a situation where everyone is thinking “if we don’t, then they will.” Bans are counterproductive. Instead we should be throwing our effort into “if we’re going to do it then we need to do it right.”

      • stealth_cookies@lemmy.ca · 5 months ago

        This is actually an interesting point I hadn’t thought about or seen people consider with regard to the high investment cost of AI LLMs. Who blinks first when it comes to halting investment in these systems if they don’t prove commercially viable (or viable quickly enough)? What happens to the West if China holds out longer and succeeds?

  • fruitycoder@sh.itjust.works · 5 months ago

    Honestly, just ban mass investment, mass power consumption, and the use of information acquired through mass surveillance, military usage, etc.

    Like, those are all regulated industries. Idc if someone works on it at home, or even in a small DC. AGI that can be democratized isn’t the threat; it’s those determined to make a superweapon for world domination. Those plans need to fucking stop, regardless of whether it’s AGI or not.

  • Perspectivist@feddit.uk · 5 months ago

    I genuinely don’t understand the people who are dismissing those sounding the alarm about AGI. That’s like mocking the people who warned against developing nuclear weapons when they were still just a theoretical concept. What are you even saying? “Go ahead with the Manhattan Project - I don’t care, because I in my infinite wisdom know you won’t succeed anyway”?

    Speculating about whether we can actually build such a system, or how long it might take, completely misses the point. The argument isn’t about feasibility - it’s that we shouldn’t even be trying. It’s too fucking dangerous. You can’t put that rabbit back in the hat.

    • XLE@piefed.social · 5 months ago

      Sam Altman himself compared GPT-5 to the Manhattan Project.

      The only difference is it’s clearer to most (but definitely not all) people that he is promoting his product when he does it…

    • ErmahgherdDavid@lemmy.dbzer0.com · 5 months ago

      Here’s how I see it: we live in an attention economy where every initiative with a slew of celebrities attached is competing for eyeballs and buy-in. It adds to information fatigue and analysis paralysis. In a very real sense, if we are debating AGI we are not debating the other stuff. There are only so many hours in a day.

      If you take the position that AGI is basically not possible or at least many decades away (I have a background in NLP/AI/LLMs and I take this view - not that it’s relevant in the broader context of my comment) then it makes sense to tell people to focus on solving more pressing issues e.g. nascent fascism, climate collapse, late stage capitalism etc.

        • danzania@infosec.pub · 5 months ago

        I think this is called the “relative privation” fallacy – it is a false choice. The threat they’re concerned about is human extinction or dystopian lock-in. Even if the probability is low, this is worth discussing.

          • ErmahgherdDavid@lemmy.dbzer0.com · 5 months ago

          Relative privation is when someone dismisses or minimizes a problem simply because worse problems exist: “You can’t complain about X when Y exists.”

          I’m talking about the practical reality that you must prioritize among legitimate problems. If you’re marooned at sea in a sinking ship you need to repair the hull before you try to fix the engines in order to get home.

          It’s perfectly valid to say “I can’t focus on everything, so I will focus first on the things that provide the biggest and most tangible improvement to my situation”. It’s fallacious to say “Because worse things exist, AGI concerns don’t matter.”

  • IninewCrow@lemmy.ca · 5 months ago

    The current point of our human civilization is like cavemen 10,000 years ago being handed machine guns and hand grenades.

    What do you think we’re going to do with all this new power?

  • SaraTonin@lemmy.world · 5 months ago

    Okay, firstly, if we’re going to get superintelligent AIs, it’s not going to happen from better LLMs. Secondly, we seem to have already reached the limits of LLMs, so even if that were how to get there it doesn’t seem possible. Thirdly, this is an odd problem to list: “human economic obsolescence”.

    What does that actually mean? Feels difficult to read it any way other than saying that money will become obsolete. Which…good? But I suppose not if you’re already a billionaire. Because how else would people know that you won capitalism?

    • davetortoise@reddthat.com · 5 months ago

      AI is pretty unique as far as technological innovation goes because of how it interacts with labour markets.

      Most labour-saving technologies increase the productivity of labour, increasing its value, which generally makes ordinary people a little better off, for a time at least.

      AI is different because instead of enhancing human labour, it competes with it, driving down the value of labour. This makes workers worse off.

      This problem is of course unique to an economic system where workers must sell their labour to others.

  • XLE@piefed.social · 5 months ago

    Superintelligence — a hypothetical form of AI that surpasses human intelligence — has become a buzzword in the AI race between giants like Meta and OpenAI.

    Thank you, MSNBC, for doing the bare minimum and reminding people that this is hypothetical (read: science fiction).

    • danzania@infosec.pub · 5 months ago

      ChatGPT would have been science fiction 5 years ago. We are already living in science fiction times, friend.

      • XLE@piefed.social · 5 months ago

        Artificial intelligence has been something people have been sounding the alarm about since the 50s. We call it AGI now, since “AI” got ruined by marketers 60 years later.

        We won’t get there with transformer models, so what exactly do the people promoting them actually propose? It just makes the Big Tech companies look like they have a better product than they do.

    • Perspectivist@feddit.uk · 5 months ago

      this is hypothetical

      And we wish to keep it that way - thus the people advocating for halting development.

  • merdaverse@lemmy.zip · 5 months ago

    In the list of immediate threats to humanity, AI superintelligence is very much at the bottom. At the top is human superstupidity.

  • muusemuuse@sh.itjust.works · 5 months ago

    It doesn’t matter. It’s too late. The goal is to build AI up enough that the poor can starve and die off in the coming recession while the rich just rely on AI to replace the humans they don’t want to pay.

    We are doomed for the crimes of not being rich and not killing off the rich.

    • Alaknár@sopuli.xyz · 5 months ago

      We’re probably some two or three decades away from any early prototypes even being conceivable, mate.