I thought of this recently (anti-LLM content within)
The reason a lot of companies/people are obsessed with LLMs and the like is that they can solve some of their problems (so they think). The thing I noticed is that a LOT of the things they try to force the LLM to fix could be solved with relatively simple programming.
Things like better search (SEO destroyed this by design, and Kagi is about the only usable search engine with easy access), organization (use a database), document management, etc.
People don't fully understand how it all works, so they try to shoehorn the LLM into doing the work for them (poorly), while learning nothing of value.
LLMs are shoehorned into products for share-value reasons, not for usability reasons. Shareholders don't care if it makes any sense. They want their companies to jump on all the latest trends.
The reason is that company decisions are largely driven by investors, and investors want their big investments in AI to return something.
Investors want constant growth, even if it must be shoehorned.
Venture Capital Driven Development at its finest.
This is true but not the whole picture.
AI is the next space race on nukes. The nation that develops AGI will 100% become the global superpower. Even sub-AGI agents will have the cyber-warfare potential of 1000s of human agents.
Human AI researchers increasingly doubt our ability to get these programs to be transparent about whether they're adhering to safety protocols. The notion of programming AI with "Asimov's 3 laws" is impossible. AIs exist to do one thing: get the highest score.
I’m convinced that due to the nature of AGI, it is an extinction level threat.
GPT LLMs "solve problems" as much as cargo cults "build airports".
See how it's apparently newsworthy that a simple chess engine on the C64 can beat ShitSkibidi. It was fucking obvious, to us. Like how random.randint(0, 10) is much worse at figuring out the sum of 2 and 4 than just calculating 2 + 4. However, it was not as obvious to the people who don't understand how ML/DL fundamentally works.
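To make the random-guess jab concrete, here's a throwaway Python sketch (nothing here is from any real product; it just contrasts sampling with calculating):

```python
import random

def guess_sum(a, b):
    """'Figure out' a + b by picking a random value in range; wrong ~10 times out of 11."""
    return random.randint(0, 10)

def compute_sum(a, b):
    """Just calculate it; right every time."""
    return a + b

# Over many trials, the guesser lands on 6 only about 1 time in 11,
# while direct calculation never misses.
trials = 10_000
hits = sum(guess_sum(2, 4) == 6 for _ in range(trials))
print(f"random guesser accuracy: {hits / trials:.2%}")  # roughly 9%
print(f"direct calculation:      {compute_sum(2, 4)}")  # always 6
```

Swap "random sampler" for "statistical next-token predictor" and the shape of the argument is the same: a system that approximates answers will lose to one that computes them, every time the computation is cheap and exact.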
Similarly, it's sad to see a lot of projects that have to do with Machine Learning being essentially killed and made worthless by people just throwing everything at ShitSkibidi instead of generating/collecting training data themselves and training a purpose-built model (not a text-based one). I see that in private as well as at work. They want to use "AI" in risk management now. Will that mean they'll use all their historical data on customers, the risks they identified, and the final results to build two or more specific models? Most likely, no. They'll just throw all the data at the internal ShitSkibidi wrapper, expect the resulting data to be usable at all, and then ask it how they should proceed. And then expect humans to actually fact-check everything it returned.
Another thing LLMs have going for them is that they're dirt cheap. Sure, they may only be correct one third of the time, but they cost 5% of the 'good' solution. So using the LLM and degrading quality makes business sense.
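Taking the comment's own numbers at face value (and generously assuming wrong answers can be detected and discarded for free), the back-of-the-envelope math does work out in the LLM's favor:

```python
# 'Good' solution: cost 1.0 per query, assume always correct.
# LLM: cost 0.05 per query, correct 1/3 of the time.
# Compare cost per *correct* answer.
good_cost_per_correct = 1.0 / 1.0       # 1.00
llm_cost_per_correct = 0.05 / (1 / 3)   # 0.15

print(round(good_cost_per_correct, 2))  # 1.0
print(round(llm_cost_per_correct, 2))   # 0.15
```

Even paying for three attempts per usable answer, it's still ~6–7x cheaper; the business case only flips if checking/discarding the wrong answers costs more than that gap, which is exactly the hidden cost the rest of this thread is complaining about.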
They are dirt cheap for now, and extremely unprofitable. We're looking at more than a doubling of subscription prices before they even approach profitability.
By then we’ll have vendor lock-in. WhO cOuLd’Ve FoReSeEn ThAt?
For now. Until all your data is owned by the corp, and you need to pay a ransom for it, and all programmers are long gone because it’s a dead profession in 10 years.
Devil's advocate: if the problems could be solved with relatively simple programming, why aren't they solved already?
Because SEO is an entire industry on its own with massive lobbying power. It was a mistake to let businesses decide the law.
Because managers and CEOs chose a shitty architect who is the son of some guy instead of hiring someone competent. Bad decisions all the way down, until you have to implement that shit or you're fired.
Because companies don't understand how that works, and don't want to pay for it. Easier to generate LLM slop to band-aid-fix a problem and create new problems.
I mean, I definitely see it used for things that are already solved by non-LLM software.
You mean they create more temporary workarounds for known problems.
LLMs are great. You can tell them a problem in words and they figure out what you mean and solve it. You cannot ignore the value of that for normal people.
Some recent examples for me:
I was playing a factory-building game and didn't want to build a spreadsheet by hand to figure out the optimal number of each building to place to get a wanted output. I told the LLM and copy-pasted the wiki entry for each building. It did some differential equations and gave me a result and a spreadsheet, all in under a minute.
I had to do some math without knowing the underlying concepts. Describing the situation and the problem and giving it all known values was much easier than reading 5 Wikipedia articles, figuring out how to break it down, which formulas to use for each step, and how to chain them all together.
I recently googled for half an hour, crawling through shit articles and reading 50-page PDFs, none of which contained the detail I wanted, before giving up and asking an AI, then clicking on the source it quoted to verify the reply. Maybe my search terms sucked, maybe I can't ask the right question because I don't know what I don't know, but the LLM was able to get it.
Are the problems I described already “solved” more computationally efficiently by other means? Absolutely yes!
Will it be faster and easier for me to throw it at an LLM? Also yes!
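For what it's worth, the steady-state building-ratio problem in the first example is plain linear balancing rather than differential equations. A minimal sketch, with entirely made-up building names and rates (a real game's wiki numbers would slot in the same way):

```python
from fractions import Fraction

# Hypothetical production chain: a smelter makes plates,
# an assembler consumes plates and makes gears.
SMELTER_PLATES_PER_S = Fraction(1)       # one smelter: +1 plate/s
ASSEMBLER_PLATES_PER_S = Fraction(2)     # one assembler: -2 plates/s
ASSEMBLER_GEARS_PER_S = Fraction(1, 2)   # one assembler: +1/2 gear/s

def buildings_for(target_gears_per_s):
    """Back-solve the chain: at steady state, production must equal consumption."""
    assemblers = target_gears_per_s / ASSEMBLER_GEARS_PER_S
    plates_needed = assemblers * ASSEMBLER_PLATES_PER_S
    smelters = plates_needed / SMELTER_PLATES_PER_S
    return {"assemblers": assemblers, "smelters": smelters}

# For 3 gears/s: 6 assemblers, which eat 12 plates/s, so 12 smelters.
print(buildings_for(Fraction(3)))
```

That said, the commenter's point stands: being *able* to write this in ten minutes is not the same as it being faster than pasting a wiki page into a chatbot.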
It's a good tool in some cases. But I think the general lack of understanding of how it works and of its shortcomings is going to cause many issues in the coming years.
That’s been true ever since the first graduates came out knowing COBOL instead of assembly. Everything keeps getting more bloated and buggy.
I wish I could go back and learn all the old ways, but no one teaches that now. I hate learning things the new way, with all the shortcuts and bloat everything has now.
There are lots of assembly programming YouTubers. My way of scratching that itch is Arduino / ESP32. The toolchain is all C code, but it's so stripped down there's not even an OS. It's just your code on the hardware.
I do love the little arduinos and pis, but I can’t think of any application I’d need one for.
Also, it's just kind of a bummer: since AI can do all the coding, there's not much purpose in me learning it all from scratch. Back in the day, you HAD to learn that way, and I much prefer that. Everything is much too easy now, and I think humanity is going to see the result of that in 15 years with the up-and-coming generation.