Despite the rush to integrate powerful new models, only about 5% of AI pilot programs achieve rapid revenue acceleration; the vast majority stall, delivering little or no measurable impact on P&L.

The research—based on 150 interviews with leaders, a survey of 350 employees, and an analysis of 300 public AI deployments—paints a clear divide between success stories and stalled projects.

    • ameancow@lemmy.world · 4 months ago

      I know you’re joking, but for those who don’t know: the headline means “startups,” and they just wanted to avoid the overused term.

      Also, yeah, it actually is far easier to have an AI fly a plane than a car. No obstacles, no sudden changes, no little kids running out from behind a cloud bank, no traffic except during takeoff and landing, and even those phases keep getting more automated.

      In fact, we don’t need “AI” for this; we’ve had autopilots that handle almost all aspects of flight for decades now. The F/A-18 Hornet famously has hand grips by the seat that the pilot is supposed to hold onto during takeoff so they don’t accidentally touch a control.

        • Pennomi@lemmy.world · 4 months ago

          To be fair, that also falls under the blanket of AI. It’s just not an LLM.

          • leisesprecher@feddit.org · 4 months ago

            No, it does not.

            A deterministic, narrow algorithm that solves exactly one problem is not AI. Otherwise the Pythagorean theorem would count as AI, as would any other mathematical formula.

            Intelligence, even in the AI sense, means being able to solve new problems. An autopilot can’t do anything other than pilot a specific aircraft, and that’s a good thing.
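            To make the distinction concrete: a formula like Pythagoras’s is a fixed mapping from inputs to one answer, with no ability to generalize beyond the problem it encodes. A minimal sketch (illustrative only, not from the thread):

```python
import math

# The Pythagorean theorem as code: deterministic and narrow.
# It answers exactly one question (the hypotenuse length) and
# cannot be applied to any problem it wasn't written for.
def hypotenuse(a: float, b: float) -> float:
    return math.sqrt(a * a + b * b)

print(hypotenuse(3.0, 4.0))  # 5.0
```

            However many such formulas you bundle together, the result still only solves the problems its authors anticipated; that is the sense in which an autopilot differs from a system trained to handle novel tasks.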

            • wheezy@lemmy.ml · 4 months ago

              Not sure why you’re getting downvoted. Well, I guess I do. AI marketing has ruined the meaning of the word to the extent that an if statement is “AI”.

              • leisesprecher@feddit.org · 4 months ago

                To a certain extent, yes.

                ChatGPT was never explicitly trained to produce code or translate text, but it can do both. Not especially well, but it manages reasonable output most of the time.

    • abbadon420@sh.itjust.works · 4 months ago

      That’s terrifying, but I don’t see why my regional train couldn’t be driven by AI in the middle of the night.