
The alarmism around AI is just marketing spin.

As @pluralistic@mamot.fr wrote: that’s “mystical nonsense about spontaneous consciousness arising from applied statistics”.

Real problems we face with AI are:

Ghost labor, erosion of the rights of artists, the costs of automation, the climate impact of data centers, and the human impact of biased, opaque, incompetent and unfit algorithmic systems.

https://pluralistic.net/2023/11/27/10-types-of-people/

    • Rikudou_Sage@lemmings.world · +8/−23 · 10 months ago

      Yeah, we had to rename AI to AGI, because marketing fuckers decided to call a (very smart) predictive model AI. I had a dumber version on my phone decades ago; this should never have been called AI.

        • FaceDeer@kbin.social · +31/−8 · 10 months ago

          It’s so annoying how suddenly everyone’s so convinced that “AI” is some highly specific thing that hasn’t been accomplished yet. Artificial intelligence is an extremely broad subject of computer science and things that fit the description have been around for decades. The journal Artificial Intelligence was first published in 1970, 54 years ago.

          We’ve got something that’s passing the frickin’ Turing test now, and all of a sudden the term “artificial intelligence” is too good for that? Bah.

          • Womble@lemmy.world · +11/−3 · 10 months ago

            We don’t have anything that passes the Turing test. The test isn’t just “does it trick people casually talking to it into thinking it’s a person”; it’s whether it can deceive a panel of experts who are deliberately trying to tease out which one of the “people” they are talking to isn’t human.

            AFAIK no LLM has passed a rigorous test like that.

            • Thorny_Insight@lemm.ee · +8 · 10 months ago

              GPT-4 ironically fails the Turing test by possessing such wide knowledge on such a variety of topics that it’s obvious it can’t be a human. Basically, it’s too competent to be human, despite its flaws.

              • Cethin@lemmy.zip · +1/−1 · 10 months ago

                This is my problem with the conversation. It doesn’t “possess knowledge” the way we think of it with humans. It repeats stuff it’s seen before. It doesn’t understand the context in which it was encountered. It doesn’t know if it came from a sci-fi book or a scientific journal, and it doesn’t understand the difference. It has no knowledge of the world and how things interact. It only appears knowledgeable because it can basically memorize a lot of things, but it doesn’t understand them.

                It’s like cramming for a test. You may pass the test, but it doesn’t mean you actually understand the material. You could only repeat what you read. Knowledge requires actually understanding why the material is what it is.
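
                To make the “repeats stuff it’s seen before” point concrete, here is a minimal sketch of purely statistical next-word prediction using a toy bigram model in Python. The corpus, function names, and sample output are made up for illustration; real LLMs are vastly larger and use learned neural weights rather than raw counts, but they are likewise trained to predict the next token from preceding text.

                ```python
                import random
                from collections import defaultdict

                # Toy bigram "language model": it only counts which word followed which
                # in the training text, then samples from those counts. There is no notion
                # of meaning, sources, or context beyond the previous word.
                def train_bigram(text):
                    counts = defaultdict(list)
                    words = text.split()
                    for prev, nxt in zip(words, words[1:]):
                        counts[prev].append(nxt)
                    return counts

                def generate(counts, start, length=10):
                    word, output = start, [start]
                    for _ in range(length):
                        followers = counts.get(word)
                        if not followers:
                            break
                        word = random.choice(followers)  # sampled by frequency, not by "understanding"
                        output.append(word)
                    return " ".join(output)

                corpus = "the robot read the journal and the robot read the novel"
                model = train_bigram(corpus)
                print(generate(model, "the"))  # e.g. "the robot read the journal and the novel"
                ```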

            • TheBlackLounge@lemm.ee · +2 · 10 months ago

              Nobody is running tests like that, but these days it’s not uncommon for something to be mistaken for being AI-generated. Even in professional settings, people are hypervigilant.

          • bitwaba@lemmy.world · +2 · 10 months ago

            Nothing can pass the Turing test for me, because I’m pretty sure everyone is a robot, including me.

          • wikibot@lemmy.world [bot] · +2/−2 · 10 months ago

            Here’s the summary for the wikipedia article you mentioned in your comment:

            Artificial Intelligence is a scientific journal on artificial intelligence research. It was established in 1970 and is published by Elsevier. The journal is abstracted and indexed in Scopus and Science Citation Index. The 2021 Impact Factor for this journal is 14.05 and the 5-Year Impact Factor is 11.

            to opt out, pm me ‘optout’.

      • Phanatik@kbin.social · +4/−6 · 10 months ago

        Whenever some dipshit responds to me with “you’re talking about AGI, this is AI”, my only reply is fuck right off.

  • fubo@lemmy.world · +24/−16 · 10 months ago

    Obviously, trivially, blatantly false, because the AI safety people have been at it since long before there was anything to market. Back then, the bullshit criticism was “AI will never be able to understand language or interpret pictures; what harm could it possibly ever do?”

    • jimbo@lemmy.world · +8/−3 · 10 months ago

      AI still doesn’t “understand” language or pictures or anything else. It’s little more than statistical analysis based on text and on input from humans tagging photos. The fact that we can get some neat output is not indicative that any understanding is going on behind the scenes.

    • Thorny_Insight@lemm.ee · +7/−5 · 10 months ago

      Even today there are a ton of people who simply seem incapable of playing out the thought experiment of an AGI being more competent than humans at virtually everything. They seem to imagine that our current generative AI models are somehow proof that it can never actually deliver and become what the AI safety people have been worried about for decades.

    • RealFknNito@lemmy.world · +1/−6 · 10 months ago

      Yeah, and now it’s just people fearmongering: “AI is the worst thing to happen to artists, musicians, graphic designers - and it steals everyone’s work! And and it might become Skynet! And and…” people who don’t understand technology always seem to do this. They did it with Crypto, did it with NFTs, and now legitimate technologies have to escape a reputational black hole all because scammers and grifters decided to use them maliciously.

      • HarkMahlberg@kbin.social · +4/−1 · 10 months ago

        “people who don’t understand technology always seem to do this. They did it with Crypto, did it with NFTs”

        😬

        • RealFknNito@lemmy.world · +2/−3 · 10 months ago

          Crypto helped third world countries use a more stable form of currency than their government’s. NFTs are no different from the serial codes on dollar bills except virtual. This is why you people are retarded. You know so little and make moronic comments like these.

  • grayman@lemmy.world · +14/−10 · 10 months ago

    “AI” has been diluted to mean nothing but marketing wank. 99.999% of “AI” is single variable linear regression estimation. Almost all of the rest is multi variable linear regression.