Despite the rush to integrate powerful new models, only about 5% of AI pilot programs achieve rapid revenue acceleration; the vast majority stall, delivering little to no measurable impact on P&L.

The research, based on 150 interviews with leaders, a survey of 350 employees, and an analysis of 300 public AI deployments, reveals a clear divide between success stories and stalled projects.

      • Pennomi@lemmy.world · 4 months ago

        To be fair, that also falls under the umbrella of AI. It’s just not an LLM.

        • leisesprecher@feddit.org · 4 months ago

          No, it does not.

          A deterministic, narrow algorithm that solves exactly one problem is not an AI. Otherwise Pythagoras would count as AI, or any other mathematical formula for that matter.

          Intelligence, even in the context of AI, means being able to solve new problems. An autopilot can’t do anything other than pilot a specific aircraft, and that’s a good thing.
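
          For illustration, a minimal Python sketch of the kind of deterministic, single-purpose routine described above (hypothetical code, not from the article or the thread):

              import math

              def hypotenuse(a: float, b: float) -> float:
                  # Pythagorean theorem: solves exactly one problem, the same way every time.
                  return math.sqrt(a * a + b * b)

              # Deterministic and narrow: identical inputs always yield identical output,
              # and the function cannot be repurposed for any other task.
              print(hypotenuse(3.0, 4.0))  # 5.0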

          • wheezy@lemmy.ml · 4 months ago

            Not sure why you’re getting downvoted. Well, I guess I do know. AI marketing has ruined the meaning of the word to the point that an if statement counts as “AI”.

            • leisesprecher@feddit.org · 4 months ago

              To a certain extent, yes.

              ChatGPT was never explicitly trained to produce code or translate text, but it can do both. Not super well, but it manages reasonable output most of the time.
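
              For contrast with a fixed formula, a minimal sketch of asking a general-purpose chat model to translate, assuming the OpenAI Python client (openai v1+), an API key in the environment, and a placeholder model name; the same generic text interface handles translation, code, or anything else phrased as a prompt:

                  from openai import OpenAI

                  client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

                  response = client.chat.completions.create(
                      model="gpt-4o-mini",  # placeholder model name; any chat-capable model works
                      messages=[{"role": "user",
                                 "content": "Translate to English: 'Das ist nur ein Beispiel.'"}],
                  )
                  print(response.choices[0].message.content)  # e.g. "This is just an example."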