Over half of all tech industry workers view AI as overrated

    • Sterile_Technique@lemmy.world · 10 months ago · +70/−5

      This is a growing pet peeve of mine. If and when actual AI becomes a thing, it’ll be a major turning point for humanity comparable to things like harnessing fire or electricity.

      …and most people will be confused as fuck. “We’ve had this for years, what’s the big deal?” -_-

        • Admax@lemmy.world · 10 months ago (edited) · +7/−5

          I’ve seen it referred to as AGI, but I think that’s wrong. ChatGPT isn’t intelligent in the slightest; it only guesses which word is statistically most likely to come next. There is no thinking or problem solving involved.

          A while ago I saw an article with a title along the lines of “spark of AGI in ChatGPT 4”, because it chose to use a calculator tool when facing a problem that required one. That would be AI (and not AGI): it has a problem, and it learns and uses available tools to solve it.

          AGI would be on a whole other level.

          Edit: Grammar

          • thedeadwalking4242@lemmy.world · 10 months ago · +12/−2

            The argument “it just predicts the most likely next word,” while true, massively undervalues what it even means to predict the next word or token. Largely these predictions are based on the sentences and ideas the model has trained on from its data sets. It’s pretty intelligent if you think about it: you read a textbook, and then when you apply the knowledge or take a test, you use what you read to form a new sentence in relation to the context of the question or problem. For the model’s “text prediction” to be correct, it has to understand certain relationships between complex ideas and objects to some capacity.

            Yes, it absolutely is not as good as human intelligence, but what it’s doing is much more advanced than the text prediction on your phone keyboard. It’s a step in the right direction: overhyped right now, but the hype is funneling cash into research, and the models are already getting more advanced. Right now half of what it says is hot garbage, but it can be pretty accurate.
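
As a rough illustration of what “predicting the next word” means mechanically, here is a toy bigram model in Python. It is nothing like a transformer internally (the corpus and code are invented for illustration), but the greedy generation loop has the same shape, and it shows why fluent-looking output by itself settles nothing:

```python
# A minimal sketch of "predict the most likely next word": a bigram model
# built from raw counts. Real LLMs condition on far more context through
# learned representations, but the generation loop has the same shape.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word after `word`."""
    followers = bigrams[word]
    return followers.most_common(1)[0][0] if followers else "<end>"

# Greedy generation: repeatedly take the single most likely next token.
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))  # "the cat sat on the cat" -- fluent-ish, no understanding
```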

            • eronth@lemmy.world · 10 months ago · +10/−4

              Right? Like, I, too, predict the next word in my sentence to properly respond to inputs with the desired output. Sure, I have personality (usually) and interests, but that’s an emergent behavior of my intelligence, not a prerequisite.

              It might not formulate thoughts the way we do, but it absolutely emulates some level of intelligence, artificially.

              • NightAuthor@lemmy.world · 10 months ago · +6/−4

                I think so many people overrate human intelligence, which causes them to underrate AI. Don’t get me wrong, our brains are amazing, but part of what makes them amazing is that they can create crazy cool AI like this.

                People just hate the idea of being meat robots, I don’t blame em.

    • kromem@lemmy.world · 10 months ago · +5/−1

      Given that AI isn’t purported to be AGI, how do you define AI such that multimodal transformers don’t qualify? They develop abstract world models as linear representations, they’re trained on unthinkable amounts of human content mirroring a wide array of capabilities, and they can do things thought to be impossible as recently as three years ago, such as explaining jokes or solving riddles that aren’t in the training set. How is that not “artificial intelligence”?
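
The “abstract world models as linear representations” claim comes from probing work on model internals (the Othello-GPT line of research, for example). A minimal sketch of the probing idea, on synthetic stand-in activations rather than a real model’s hidden states:

```python
# A sketch of a "linear probe": if a model encodes a world-state feature
# along some direction in its hidden states, a linear readout recovers it.
# The activations below are synthetic stand-ins, not real transformer states.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 64

# Pretend hidden states: a binary world feature (say, "this board square is
# occupied") embedded along one direction, plus noise from everything else.
feature = rng.integers(0, 2, size=n)
direction = rng.normal(size=d)
activations = np.outer(feature - 0.5, direction) + 0.5 * rng.normal(size=(n, d))

# The probe itself: a least-squares linear readout of the feature.
w, *_ = np.linalg.lstsq(activations, feature - 0.5, rcond=None)
predictions = (activations @ w > 0).astype(int)
print(f"probe accuracy: {(predictions == feature).mean():.2%}")  # near 100%
```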

    • GBU_28@lemm.ee · 10 months ago · +4/−1

      Yup. LLM RAG is just search 2.0 with a GPU.

      For certain use cases it’s incredible, but those use cases shouldn’t be your first idea for a pipeline.
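
A minimal sketch of the “search 2.0” point: the retrieval half of RAG is plain ranked search (TF-IDF here, where production systems would use neural embeddings on a GPU), and the language model only rephrases what the search step returned. The documents are invented, and `call_llm` is a hypothetical stand-in for a real model client:

```python
# Rough RAG sketch: ranked search over documents, then a prompt that asks
# the model to answer from what search found. TF-IDF keeps the search
# lineage obvious; real systems typically swap in neural embeddings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Reset your password from the account settings page.",
    "Invoices are emailed on the first business day of each month.",
    "The API rate limit is 100 requests per minute per key.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Plain ranked search: score every document against the query, take top-k."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    return [documents[i] for i in scores.argsort()[::-1][:k]]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)  # hypothetical stand-in for your LLM client

print(retrieve("how do I reset my password?"))
```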

    • archon@sh.itjust.works · 10 months ago · +1

      THANK YOU! I’ve been saying this for a long time, but I’ve just kind of accepted that the definition of AI is no longer what it was.

    • marcos@lemmy.world · 10 months ago · +11/−53

      It absolutely is AI. A lot of stuff is AI.

      It’s just not that useful.

      • Winged_Hussar@lemmy.world · 10 months ago (edited) · +37/−5

        The decision tree my company uses to deny customer claims is not AI despite the business constantly referring to it as such.

        There’s definitely a ton of “AI” that is nothing more than an If/Else statement.
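
A hypothetical sketch of that kind of rebranded “AI”; the field names and thresholds are invented for illustration:

```python
# What "AI" sometimes means in practice: a hand-written decision tree.
# Field names and thresholds are hypothetical, invented for illustration.
def should_deny_claim(claim: dict) -> bool:
    """Deterministic if/else rules -- nothing here is learned from data."""
    if claim["amount"] > 10_000:
        return True
    if claim["policy_lapsed"]:
        return True
    if claim["prior_claims"] >= 3 and claim["amount"] > 2_500:
        return True
    return False

print(should_deny_claim({"amount": 5_000, "policy_lapsed": False, "prior_claims": 1}))
# False -- the same answer every time, no model in sight
```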

        • thedeadwalking4242@lemmy.world · 10 months ago · +10

          For many years, AI referred to exactly that type of technology. It is not, in fact, AGI; historically in the technical field, AI referred more to decision trees and classification/linear regression models.
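
In that historical sense, a few lines of scikit-learn are textbook “AI”: a decision tree learned from data, no neural networks involved. A minimal example on the standard iris dataset:

```python
# Classic-ML "AI": a decision tree learned from data with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(f"test accuracy: {tree.score(X_test, y_test):.2%}")
```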

        • Wrench@lemmy.world · 10 months ago · +10/−2

          That’s basically what video game AI is, and we’re happy enough to call it that.
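
Classic video-game “AI” in miniature: a hand-written state machine. The states and thresholds below are invented for illustration, but plenty of shipped games are not much more elaborate than this:

```python
# A guard NPC's "AI": hand-written state transitions, invented thresholds.
def next_state(state: str, player_distance: float, health: float) -> str:
    """Pick the NPC's next state from simple rules on the game situation."""
    if health < 0.2:
        return "flee"
    if state == "attack" and player_distance < 8:
        return "attack"  # commit to the attack until the player breaks away
    if player_distance < 5:
        return "attack"
    if player_distance < 15:
        return "chase"
    return "patrol"

print(next_state("patrol", player_distance=12, health=0.9))  # "chase"
```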

      • bitwolf@lemmy.one · 10 months ago · +19/−1

        It’s useful at sucking down all the compute we complained about crypto using.

        • Womble@lemmy.world · 10 months ago · +2

          The main difference is that crypto was/is burning huge amounts of energy to run a distributed Ponzi scheme. LLMs are at least using energy to create a useful tool (even if there is discussion over how useful they are).

          • bitwolf@lemmy.one · 10 months ago · +1

            I’d also argue AI is much easier to pull a profit from than a currency exchange 🙂

        • Bakkoda@sh.itjust.works · 10 months ago · +3/−2

          Yeah, it’s funny how that little tidbit just went quietly into the bin, never to be talked about again.

      • ComradeWeebelo@lemm.ee · 10 months ago · +8/−1

        There are significant differences between statistical models and AI.

        I work for an analytics department at a Fortune 100 company. We have a very clear delineation between what constitutes a model and what constitutes an AI.

        • wischi@programming.dev · 10 months ago · +4/−3

          That’s true. Statistical models are very carefully engineered and tested, while current machine learning models are created by throwing a lot of training data at the software and hoping for the best, that is, that the things the model learns are not complete bullshit.
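
A small illustration of that contrast on synthetic data: the statistical model asserts an explicit functional form whose parameters mean something and can be checked against assumptions, while the flexible fit (a high-degree polynomial standing in for “throw capacity at it”) can only be judged empirically:

```python
# Synthetic data with a known generating process, for the sake of the demo.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)
y = 3.0 * x + 2.0 + rng.normal(0, 1, 200)

# Statistical model: we *assert* y = a*x + b + noise and estimate a and b,
# so each parameter has a meaning that can be checked against the assumption.
A = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"OLS fit: a={a:.2f}, b={b:.2f}")  # close to the true 3.0 and 2.0

# ML-style alternative: a flexible black-box fit whose coefficients mean
# nothing individually -- you can only validate it on held-out data and
# hope what it learned generalizes.
coeffs = np.polyfit(x, y, deg=5)
print("polynomial coefficients:", np.round(coeffs, 3))
```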