It always feels like some form of VR tech comes out with some sort of fanfare and with a promise it will take over the world, but it never does.
Perpetual motion
Cold fusion
Flying cars
Teleportation
Heavy-lift dirigiblesI’m going to get downvoted for this
Open source has its place, but the FOSS community needs to wake up to the fact that documentation, UX, ergonomics, and (especially) accessibility aren’t just nice-to-haves. Every year has been “The Year of the Linux Desktop™” but it never takes off, and it never will until more people who aren’t developers get involved.
“Smart” TVs. Somehow they have replaced normal televisions despite being barely usable, laggy, DRM infested garbage.
Man, I haven’t really faced this yet. My flat screen is a really old Panasonic plasma and it is"barely" smart. It came with a few apps on it. I ignore them and use it as a dumb monitor, running everything through my receiver instead. When it dies, I don’t know what I’ll do.
The concept confuses and infuriates me. I’m just going to stick a game console or Blu-ray player on it, but you can’t buy a TV these days that doesn’t have a bloated “smart” interface. The solution, for me at least, is a computer monitor. I don’t need or want a very large screen, and a monitor does exactly one thing, and that’s show me what I’ve plugged into it.
They are surveilance- and ad delivery platorms. The user experience is as bad as the consumer can tolerate. They work as intended.
I don’t buy it, they would be better at whatever nefarious crap if they didn’t take a full second to navigate between menu options, or had a UI designed by someone competent. Even people who have subscriptions to the services the TV is a gateway to have a hard time figuring out how to use them. These things aren’t even good at exploitation, they are decaying technology.
So I have a contentious one. Quantum computers. (I am actually a physicist, and specialised in qunatum back in uni days, but now work mainly in in medical and nuclear physics.)
Most of the “working”: quantum computers are experiments where the outcome has already been decided and the factoring they do can be performed on 8 bit computers or even a dog.
https://eprint.iacr.org/2025/1237.pdf “Replication of Quantum Factorisation Records with an
8-bit Home Computer, an Abacus, and a Dog”
This paper is a hilarious explanation of the tricks being pulled to get published. But then again, it is a nascent technology, and like fusion, I believe it will one day be a world changing technology, but in it’s current state is a failure on account of the bullshittery being published. Then again such publications are still useful in the grand scheme of developing the technology, hence why the article I cited is good humoured but still making the point that we need to improve our standards. Plus who doesnt like it when an article includes dogs.
Anyway, my point is, some technologies will be constant failures, but that doesn’t mean we should stop.
A cure for cancer is a perfect example. Research has been going on for a century and cumulatively amassed 100s of billions of dollars of funding. It has failed constantly to find a cure, but our understanding of the disease, treatment, how to conduct research, and prevention have all massively increased.They didn’t thank Scribble (the dog) in their acknowledgements section. 1/10 paper, would only look at the contained dog picture
A cure for cancer is a perfect example. Research has been going on for a century and cumulatively amassed 100s of billions of dollars of funding. It has failed constantly to find a cure, but our understanding of the disease, treatment, how to conduct research, and prevention have all massively increased.
Cancer != cancer. There are hundreds of types of cancer. Many types meant certain death 50 years ago and can be treated and cured now with high reliability. “The” cure for cancer likely doesn’t exist because “the” cancer is not a singular thing, but a categorization for a type of diseases.
yeah it is like saying a cure for virus or a cure for bacteria. Its like why we don’t have a cold vaccine and flue ones have to be redone every year.
Thank you for helping educate on this. I live in the best time in history to have the cancer I have. I’ll be able to live a pretty full life with what would have been a steady decline into an immobile death, were this 30 years ago.
Yes of course. There are also many types of quantum computer and applications, multiple types of fusion, and cancers.
Exactly, a “cure for cancer” is like “stopping accidents”.
There’s still cancer, and there are still accidents. But on both fields it’s much better to be alive in 2026 than in 1926
We have also produced treatments that work to some extent for some forms of cancer.
We don’t have a 100% reliable silver bullet that deals with everything with a simple five minute shot, but…
AI.
How is AI a failure exactly?
AI is great, LLMs are useless.
They’re massively expensive, yet nobody is willing to pay for it, so it’s a gigantic money burning machine.
They create inconsistent results by their very nature, so you can, definitionally, never rely on them.
It’s an inherent safety nightmare because it can’t, by its nature, distinguish between instructions and data.
None of the company desperately trying to sell LLMs have even an idea of how to ever make a profit off of these things.
LLMs are AI. ChatGPT alone has over 800 million weekly users. If just one percent of them are paying, that’s 8 million paying customers. That’s not “nobody.”
That sheer volume of weekly users also shows the demand is clearly there, so I don’t get where the “useless” claim comes from. I use one to correct my writing all the time - including this very post - and it does a pretty damn good job at it.
Relying on an LLM for factual answers is a user error, not a failure of the underlying technology. An LLM is a chatbot that generates natural-sounding language. It was never designed to spit out facts. The fact that it often does anyway is honestly kind of amazing - but that’s a happy accident, not an intentional design choice.
ChatGPT alone has over 800 million weekly users. If just one percent of them are paying, that’s 8 million paying customers. That’s not “nobody.”
Yes, it is. A 1% conversion rate is utterly pathetic and OpenAI should be covering its face in embarrassment if that’s. I think WinRAR might have a worse conversion rate, but I can’t think of any legitimate company that bad. 5% would be a reason to cry openly and beg for more people.
Edit: it seems like reality is closer to 2%, or 4% if you include the legacy 1 dollar subscribers.
That sheer volume of weekly users also shows the demand is clearly there,
Demand is based on cost. OpenAI is losing money on even its most expensive subscriptions, including the 230 euro pro subscription. Would you use it if you had to pay 10 bucks per day? Would anyone else?
If they handed out free overcooked rice delivered to your door, there would be a massive demand for overcooked rice. If they charged you a hundred bucks per month, demand would plummet.
Relying on an LLM for factual answers is a user error, not a failure of the underlying technology.
That’s literally what it’s being marketed as. It’s on literally every single page openAI and its competitors publish. It’s the only remotely marketable usecase they have, because these things are insanely expensive to run, and they’re only getting MORE expensive.
It can’t really reliably do any of the stuff which it is marketed as being able to do, and it is a huge security risk. Not to mention the huge climate issues for something with so little gain.
It’s quite bad at what we’re told it’s supposed to do (producing reliably correct responses), hallucinating up to 40% of the time.
It’s also quite bad at not doing what it’s not supposed to. Meaning the “guardrails” that are supposed to prevent it from giving harmful information can usually be circumvented by rephrasing the prompt or some form of “social” engineering.
And on top of all that we don’t actually understand how they work in a fundamental level. We don’t know how LLMs “reason” and there’s every reason to assume they don’t actually understand what they’re saying. Any attempt to have the LLM explain its reasoning is of course for naught, as the same logic applies. It just makes up something that approximately sounds like a suitable line of reasoning.
Even for comparatively trivial networks, like the ones used for written number recognition, that we can visualise entirely, it’s difficult to tell how the conclusion is reached. Some neurons seem to detect certain patterns, others seem to be just noise.You seem to be focusing on LLMs specifically, which are just one subcategory of AI. Those terms aren’t synonymous.
The main issue here seems to be mostly a failure to meet user expectations rather than the underlying technology failing at what it’s actually designed for. LLM stands for Large Language Model. It generates natural-sounding responses to prompts - and it does this exceptionally well.
If people treat it like AGI - which it’s not - then of course it’ll let them down. That’s like cursing cruise control for driving you into a ditch. It’s actually kind of amazing that an LLM gets any answers right at all. That’s just a side effect of being trained on a ton of correct information - not what it’s designed to do. So it’s like cruise control that’s also a somewhat decent driver, people forget what it really is, start relying on it for steering, and then complain their “autopilot” failed when all they ever had was cruise control.
I don’t follow AI company claims super closely so I can’t comment much on that. All I know is plenty of them have said reaching AGI is their end goal, but I haven’t heard anyone actually claim their LLM is generally intelligent.
I know they’re not synonymous. But at some point someone left the marketing monkeys in charge of communication.
My point is that our current “AI” is inadequate at what we’re told is its purpose and should it ever become adequate (which the current architecture shows no sign of being capable) we’re in a lot of trouble because then we’ll have no way to control an intelligence vastly superior to our own.So our current position on that journey is bad and the stated destination is undesirable, so it would be in our net interest to stop walking.
If people treat it like AGI - which it’s not - then of course it’ll let them down.
People treat it like the thing it’s being sold as. The LLM boosters are desperately trying to sell LLMs as coworkers and assistants and problemsolvers.
I don’t personally remember hearing any AI company leader ever claim their LLM is generally intelligent - and even the LLM itself will straight-up tell you it isn’t and shouldn’t be blindly trusted.
I think the main issue is that when a layperson hears “AI,” they instantly picture AGI. We’re just not properly educated on the terminology here.
I don’t personally remember hearing any AI company leader ever claim their LLM is generally intelligent
Not directly. They merely claim it’s a coworker that can complete complex tasks, or an assistant that can do anything you ask.
The public isn’t just failing here, they’re actively being lied to by the people attempting to sell the service.
For example, here’s Sammy saying exactly that: https://www.technologyreview.com/2024/05/01/1091979/sam-altman-says-helpful-agents-are-poised-to-become-ais-killer-function/
And here’s him again, recently, trying to push the “our product is super powerful guys” angle with the same claim: https://www.windowscentral.com/artificial-intelligence/openai-chatgpt/sam-altman-ai-agents-hackers-best-friend
But he is not actually claiming that they already have this technology but rather that they’re working towards it. He even calls ChatGPT dumb there.
and ChatGPT (which Altman referred to as “incredibly dumb” compared with what’s coming next)
AI,
Encryption with safe, unexploitable backdoors.
https://en.wikipedia.org/wiki/One-time_pad
The one-time pad (OTP) is an encryption technique that cannot be cracked in cryptography. It requires the use of a single-use pre-shared key that is larger than or equal to the size of the message being sent.
Printers.
I think there is an open source printer being created. Potentially has the chance at being the only printer that isn’t a pile of shit.
I’ve seen that project. Complete radio silence since the announcement and zero path to releasing anything.
It really sucks.
I really liked my old brother laser printer.
Keyword in that is ‘old’ - new printers are shit, and Brother printers have been pretty bad in my experience as well. Most any new printers I’ve touched is just terrible.
But then again, I stan my old HP color LJ that I got for free from the early 2010s that I got when my employer went to a printer contract service and just dumped all the printers they currently had. That fucker runs like a champ and has let me put in after market toner carts without much complaint and 0 printing issues. Modern HP printers only belong on fire.
I have a black and white laser printer — a Brother, FWIW — that works great. It sits there and when I print the occasional document, flips on and quietly and quickly does its thing. I remember printers in past decades. Paper jams. Continuous-tractor feed paper having the tractor feeds rip free in the printer. Slow printing. Loud printing. Prints that smeared. Clogging ink nozzles on inkjets.
It replaced a previous Apple black-and-white laser printer from…probably the early 1990s that I initially got used which also worked fine and worked until the day I threw it out — I just wanted more resolution, which current laser printers could do.
The only thing that I can really beat the Brother up for is maybe that, like many laser printers, to cut costs on the power supply, it has a huge power spike in what it consumes when it initially comes on; I’d rather just pay for a better power supply. But it’s not enough for me to care that much about it, and if I really want to, I can plug it into power regulation hardware.
It’s not a photo printer, and so if someone wants to print photos, I can appreciate that a laser printer isn’t ideal for that, but…I also never print photos, and if I did at some point, I’d probably just hit a print shop.
Too true :(
The big one would be viable nuclear fusion, we’ve been trying to figure it out and spending money on it for like 80 years now.
That being said, there’s actually a lot of verified progress on it lately by reputable organizations and international teams.
As far as i know they can get it working in small scale, in labs
It’s only 30 years away!
Just like it was 30 years ago.
Ah, the so called Fusion Constant.
deleted by creator
Don’t believe the ads. They are just trying to hype something that is too early for its time.
VR tech can, and it will, revolutionise gaming. It’s just a question of when. Headsets are too heavy and require wires which impedes movement. VR glasses are developed by big tech and are perfect for privacy invasions, plus their batteries don’t last anywhere near long enough.
Smartphones had already been invented nearly a decade before iPhones came out, but they were too early. The tech wasn’t ready. Look at where they are now. Solar panels were invented nearly a century ago but didn’t take off until then entire supply chain and manufacturing chain was built nearly 80 years later. The friggin helicopter was invented centuries ago and so were planes. You will find countless other examples.
Right now, we’re trying to make it possible to conceive children without any sex and to grow them in external wombs. This has been a quest for decades and we might not see it bear fruit for a few decades more.
Just because they have failed so far doesn’t mean they will always be failures. Every failure narrows the problem space to points of success.
The flying car, AI, cold fusion, anti-aging.
The flying car,
Those are called helicopters. They’re literally just cars but every advantage and every downside is amplified.
They’re amazing for taking a small number of people somewhere, at massive cost to the surroundings. They’re noisy, take up a lot of space, require lots of specialized Infrastructure just for them and they are incredibly dangerous to their surroundings.
cold fusion
That’s not a technology, it’s a scam. Regular fusion is absolutely real, it’s just super complicated and hugely underfunded.
No, a helicopter is a flying vehicle that can’t drive on city streets. A flying car is a street legal vehicle that can take off and land like a plane. https://youtu.be/a2tDOYkFCYo for an idea of what I’m talking about.
Doesn’t matter if cold fusion is a scam or not. People keep trying to make it work which fits OP’s question.
Nobody but quacks is trying to make cold fusion work. Are you confusing it with “regular” nuclear fusion?
Flying cars. The idea has intuitive appeal — just drive like normal, but most congestion problems go away!
https://en.wikipedia.org/wiki/Flying_car
We’ve made them, but the tradeoffs that you have to make to get a good road vehicle that is also a good aircraft are very large. The benefits of having a dual-mode vehicle are comparatively limited. I think that absent some kind of dramatic technological revolution, like, I don’t know, making the things out of nanites, we’ll just always be better off with dedicated vehicles of the first sort or the second.
Maybe we could have call-on-demand aircraft that could air-ferry ground vehicles, but I think that with something on the order of current technology, that’s probably as close as we’ll get.
Flying cars lose al appeal the moment you encounter other drivers on the road. Just imagine that, but flying.
I don’t think any government will ever allow flying cars.
Too prone for accidents, and way too much freedom.There are many models of flying cars. They usually are bad cars and bad planes and very expensive. Only good for niche wealthy enthusiasts.
Holographic / Crystalline storage.
Cinema movies in 3D with the stupid glasses.
Home printers














