- cross-posted to:
- technology@beehaw.org
- technology@lemmy.world
Despite the rush to integrate powerful new models, about 5% of AI pilot programs achieve rapid revenue acceleration; the vast majority stall, delivering little to no measurable impact on P&L.
The research—based on 150 interviews with leaders, a survey of 350 employees, and an analysis of 300 public AI deployments—paints a clear divide between success stories and stalled projects.
Wait, we have AI flying planes now?
I know you’re joking, but for those who don’t get it, the headline means “startups”; they just wanted to avoid the overused term.
Also, yeah, actually it’s far easier to have an AI fly a plane than a car. No obstacles, no sudden changes, no little kids running out from behind a cloud bank, no traffic except during takeoff and landing, and those phases can be automated more and more as well.
In fact, we don’t need “AI”; we’ve had autopilots that handle almost all aspects of flight for decades now. The F/A-18 Hornet famously has hand grips by the seat that the pilot is supposed to hold onto during takeoff so they don’t accidentally touch a control.
what do you think an autopilot is?
A finely refined model based on an actual understanding of physics and not a glorified Markov chain.
To be fair, that also falls under the blanket of AI. It’s just not an LLM.
No, it does not.
A deterministic, narrow algorithm that solves exactly one problem is not an AI. Otherwise Pythagoras would count as AI, or any other mathematical formula for that matter.
Intelligence, even in terms of AI, means being able to solve new problems. An autopilot can’t do anything other than pilot a specific aircraft, and that’s a good thing.
Not sure why you’re getting downvoted. Well, I guess I do. AI marketing has ruined the meaning of the word to the extent that an if statement is “AI”.
Can text generators solve new problems though?
To a certain extent, yes.
ChatGPT was never explicitly trained to produce code or translate text, but it can do it. Not super good, but it manages some reasonable output most of the time.
Mild altitude and heading corrections.
That’s terrifying, but I don’t see why my regional train can’t run on AI in the middle of the night.

It took me a while to realize it is an Otto pilot…
It’s a bubble. This article is by someone realizing that who has yet to move their investments.
Yeah, 95% of AI companies either have no functional product or just a ChatGPT token account and a prompt.
Most of them could be replaced by a high school student and an N8N instance.
Have you talked to the average high school student these days? Not that the typical AI LLM response is much better, but I honestly feel sorry for the kids.
Probably varies a lot by school, state, and family. I’m around a lot of kids in GT and Engineering classes.
You mention N8N. Last week I had a sales VP mention it as well. Could you elaborate on your perspective? I’ve been building databases in BigQuery for the past month and will start utilizing ML for a business need so I probably missed some write up about it.
I’m a program manager, have some small coding experience.
N8n is like Legos for API access: you can build tons of integrations that would otherwise have been impossible, with just a few hours of work. We have an issue where people don’t complete their Slack profiles. Using n8n I made an integration between our HR software and Slack so that it automatically populates most fields without having to bug people.
And after that, it runs a check for what manual thing they are missing and sends them a message.
You put an HTTP block behind a filter block behind a Slack block, and it handles everything for you.
Would recommend you give it a try. I have it running on the work instance, but I also have a local one running on my Raspberry Pi that I plan to use to fool around.
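For anyone curious what that workflow boils down to outside n8n, here’s a minimal Python sketch of the same profile-sync logic. The field names, record shapes, and the split between auto-populated and manual fields are all made up for illustration, not taken from any real HR or Slack schema:

```python
# Sketch of the HR-to-Slack profile sync described above.
# Field names and the HR record format are hypothetical.

# Fields we can copy straight from the HR system into Slack.
AUTO_FIELDS = ["title", "department", "phone"]
# Fields only the employee can fill in themselves.
MANUAL_FIELDS = ["pronouns", "timezone"]

def build_profile_update(hr_record: dict, slack_profile: dict) -> dict:
    """Return the Slack profile fields we can auto-populate from HR data."""
    return {
        field: hr_record[field]
        for field in AUTO_FIELDS
        if not slack_profile.get(field) and hr_record.get(field)
    }

def missing_manual_fields(slack_profile: dict) -> list:
    """Fields we still need to nag the employee about."""
    return [f for f in MANUAL_FIELDS if not slack_profile.get(f)]

hr = {"title": "Engineer", "department": "Platform", "phone": "555-0100"}
slack = {"title": "", "department": "Platform", "pronouns": ""}

update = build_profile_update(hr, slack)   # fields to push to Slack
to_nag = missing_manual_fields(slack)      # fields to remind the user about
```

In n8n, each of these functions would just be a node in the chain (HTTP request, filter, Slack message), which is the whole appeal: the glue code disappears.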
N8N is like IFTTT (if this then that)
It’s a mostly codeless solution for wiring things together, meaning you can use semi-non-skilled labor to do somewhat difficult things.
This guy can be a little hard to stomach for some, but he goes into great depth on setting up some n8n use cases, and he doesn’t waste a lot of time doing it. https://www.youtube.com/watch?v=ONgECvZNI3o
Right now, we use it so that if IT puts a certain emoji on a slack message, it makes a jira ticket, letting us know that work has been triaged and created, but if a user does it, it fails.
You could have N8N read a slack channel, or load an RSS feed, or take input from a website, send that data through an LLM prompt to transform the data and then have it or an agent do some work or respond to the input, with minimal need to write code. Really the limits are what services it supports (or your ability to add that API) and your imagination.
In Chuck’s example, he had N8N load several RSS feeds, make thumbnails from them, read the description, and use an LLM to shorten the text without losing meaning and provide a clean list of media to a Discord channel.
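As a toy illustration of that kind of pipeline (not Chuck’s actual workflow), here’s the transform step in plain Python, with the feed items hard-coded and the LLM “shorten” node stubbed out as word-boundary truncation:

```python
# Toy version of an RSS -> shorten -> Discord pipeline.
# Feed items are hard-coded and the LLM summarization node is stubbed
# with simple truncation; in n8n each step would be its own node.

def shorten(text: str, limit: int = 60) -> str:
    """Stand-in for the LLM node: truncate at a word boundary."""
    if len(text) <= limit:
        return text
    return text[:limit].rsplit(" ", 1)[0] + "…"

def format_for_discord(items: list) -> str:
    """Turn feed items into the clean list posted to the channel."""
    return "\n".join(
        f"- **{item['title']}**: {shorten(item['description'])}"
        for item in items
    )

items = [
    {"title": "AI bubble watch", "description": "A very long description " * 5},
    {"title": "Short one", "description": "Tiny."},
]
print(format_for_discord(items))
```

Swapping the `shorten` stub for a real LLM call is exactly the kind of one-node change these tools make easy.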
https://n8n.io/integrations/google-bigquery/and/openai/
You could define a trigger, say a chatbot or a Slack channel, have it hit your BigQuery, send the data to GPT to make it human-readable, and respond to requests in the channel, with some futzing around with logins, flowcharting, and JavaScript variable names…
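The glue in the middle of that flow is small. This sketch fakes the BigQuery result as a list of dicts (roughly what a client row iterator gives you) and leaves out the actual GPT call and Slack response, since the interesting part is just rendering rows into a prompt:

```python
# Sketch of the BigQuery -> GPT -> Slack glue described above.
# `rows` stands in for a BigQuery result set; the model call itself
# and the Slack response are omitted.

def rows_to_prompt(question: str, rows: list) -> str:
    """Render query results into a prompt asking the model to answer from them."""
    header = " | ".join(rows[0].keys())
    body = "\n".join(" | ".join(str(v) for v in row.values()) for row in rows)
    return (
        "Answer the question using only the data below.\n"
        f"Question: {question}\n"
        f"{header}\n{body}"
    )

rows = [
    {"region": "EMEA", "revenue": 120},
    {"region": "APAC", "revenue": 95},
]
prompt = rows_to_prompt("Which region earned more last quarter?", rows)
print(prompt)
```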
Yeah every single day the top 5 new products on ProductHunt are AI trash. It’s wild what the bubble has become
Today:

Oh shit, I see ElevenLabs on that list. They do tend to stir stuff up.
They used to have paid voice actors training imitations of real celebrities. You could do stuff like search for “ship captain” and get somebody knocking off Picard.
Looks like they released a music model trained on (paid) licensed material. Even their best sample stuff is kind of marginal, but it is real.
This sounds about right. Figure 50% are just screaming at their employees to use AI, and at managers to lower headcount and make it up with AI and such. Then maybe another 25% buy some company’s AI solution and expect roughly the same from there. Then maybe 15% actually try to identify where AI could be helpful but don’t really listen to feedback and just doggedly move forward. Eventually you get to the ones that identify where it might help, offer it to employees like any other software where they can request a license, let it grow and help organically, and look mostly to improve results or productivity.
deleted by creator
How’d that end up? Totally fine, right?
Completely agree.
I’ve got clients where I can see immediate benefits right now, and I’ve got clients where I don’t think it’s a good idea yet. For most of those that could benefit, it’s small tweaks to workflow processes to save a few FTEs here and there, not these massive-scale rollouts we’re seeing.
Unfortunately Microsoft, along with other companies, is selling full-scale sexy to executives when full-scale sexy isn’t actually ready yet. What’s available does work for some things, but it’s hard to get an executive team to sign off on a test project to save only 10 employees’ worth of work in a 2,000-person company when they’re simultaneously a) worried about it going horribly wrong, and b) worried about falling behind other companies by not going fast enough.
Figure 50% are just screaming at their employees to use AI and at managers to lower headcount and make it up with AI and such.
Immediately imagined it being screamed in this voice:

“Use AI and make it lame!”
Shocked that LLM wrapper slop that isn’t deterministic only has limited use cases. Sam Altman is the biggest con artist of our time
A few years ago we had these stupid mandatory AI classes all about how AI could help you do your job better. It was supposed to be multiple parts, but we never got past the first one. I think they realized it wouldn’t help most of the company, but they did leave our bespoke chatbot up for our customers/salespeople. It is pretty good at helping with our products, but I assume a lot of tuning has been done. I assume if we fed a local AI our data we could make it helpful, but none of them have more than a basic knowledge of anything I do on a day-to-day basis.
Usually for those chatbots you take a trained model and use RAG: essentially you turn the question into a traditional search and ask the LLM to summarize the contents of the results. So it’s frequently a convenient front end to a search engine, which is how it avoids having to be trained to produce relevant responses. It’s generally prohibitively difficult, in various ways, to fine-tune an LLM through training and get the desired behavior. So it can act like it “knows” about the stuff you do despite zero training, because other methods are stuffing the prompts with the right answers.
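A bare-bones sketch of that RAG loop: the “search engine” here is naive keyword overlap over a hard-coded doc list, and the actual LLM call is left out, but it shows how the right answer ends up in the prompt without any training:

```python
# Minimal RAG: retrieve the best-matching document for a question,
# then stuff it into the prompt so the model can answer from material
# it was never trained on. A real deployment would use a proper search
# engine or vector index instead of this keyword-overlap scoring.

DOCS = [
    "To reset your badge, visit the security desk on floor 2.",
    "Expense reports are due by the 5th of each month.",
    "The VPN client must be updated every 90 days.",
]

def retrieve(question: str, docs: list) -> str:
    """Return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question: str, docs: list) -> str:
    """Stuff the retrieved context into the prompt; the LLM call would follow."""
    context = retrieve(question, docs)
    return (
        "Using only the context, answer the question.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )

print(build_prompt("When are expense reports due?", DOCS))
```

The model never needs to “know” your policies; the retrieval step puts them in front of it on every request, which is why these bots work with zero fine-tuning.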
“Sir, how is that going to help me do my job faster?” “Just ask it ‘how do I put the fries in the bag faster’ and then do what it says.”
deleted by creator
Good. How do we fix the surviving 5%?
And the other 5% are bullshitting.
deleted by creator
If you care about AI development, you would care a lot about the entire industry getting wrecked and set back decades because of a bursting bubble and a lack of independent funding.
This isn’t just about AI either: when an industry valued at nearly half a trillion dollars crashes, it takes the ENTIRE FUCKING ECONOMY with it. I have lived through these bubbles before; this one is bigger and worse than any of them.
You won’t get your AI waifus if you have no job and nobody is hiring developers for AI waifus.
.
Okay but we’re talking about economics here, not the “tool” specifically. I think some people are so hung up on knee-jerk defensiveness of AI that they lose sight of everything but promoting it.
.
I’m sorry if you hope so,
Arguing that there’s an economic scheme threatening AI development and having you translate it as “hope” that there is going to be an economy-destroying bubble burst tells me I won’t get far in this conversation. Maybe figure out if there’s a less emotional/defensive way of looking at all this.
deleted by creator
So you’re one of the “some people” got it.
Decent article with a b. S agenda.
Its aimed at ages. Younger js better according to the article. So instead of focusing on what the issues with fucking a I are, they get to bring in ages.
As soon as they start that shit, you I know its to distract from the real issues
If that’s what you actually intended to type, you might be having a stroke.
And here’s another bigot.
Why’s it the most intolerant who are biased against age?
Maga has nothing on you guys when it comes to agism.
Your both wrong
I don’t think you read what the commenter above you wrote. He was commenting on your disjointed thought process, not the content of your comment. You’re typing like a crazy person. Take a few minutes or days away from the computer and calm down a bit.
I have an extremely small company where I am the only employee, and AI has let me build and do stuff that I would have needed a small team for. The jump in quality from where I started to what I’m able to do now is really great, and it’s thanks to AI.
I have no formal training or work experience in coding, but I taught myself Python years ago. Additionally, I don’t work in IT, so I think using AI to code has been extremely beneficial.
So you’re saying you have no professional coding experience, yet you know that a team of professionals couldn’t produce code at the quality you want?
Also, saying “extremely small company” when you mean self-employed is weird. It’s fine to have your own company for business/contracting.
I just hope you actually understand the code that has been produced and no customer or company data is at risk.
Yup. The absolute only useful task I’ve found it to handle is code documentation, which is as fast as it’s allowed to travel in my sphere.
Financially, I earn a really low amount. I’ve been freelance for a while, but am trying to grow the business, so it’s extremely small.
All the stuff I’m using AI for is just for presentation of internal materials. Nothing critical.
I feel similar.
The AI is great for low value tasks that eat time but aren’t difficult nor require high skill, nor are they risky. That’s the stuff that’s traditionally really difficult to automate.
When I’m actually doing the core parts of my job AI is so awful it’s clear humans are not going anywhere.
But those annoying side tasks need to get done.
I’ve set up a bunch of read-only AI tools, and that’s enough to speed up huge amounts of work.
That’s great, but you’re not what this article is about. There are tens of thousands of companies popping up left and right with far less ambition to succeed, who just want to launch the next “AI-powered toaster” and are hoping to make a fast buck and get bought out by a larger company like Google or OpenAI or Meta.
Combine that with growing public skepticism of AI and a general attitude that it’s being overused (the same attitude that makes you knee-jerk defensive about your business). That attitude is growing, and people are losing interest in AI as a feature because it’s being overplayed, over-hyped, and not delivering on its promises. This makes for a growing bubble with nothing inside, one that becomes more fragile every day. Not everyone is a successful vibe-coder, nor can they be.
I think you have blatant security holes that threaten your bottom line and your customers.