When I read this crap, all I can think is: yeah, backlash is growing because the forced implementation is growing. Another useless sentiment-based article.
Let's use LLMs for the things LLMs are useful for. They're not a panacea, and they're not appropriate for every use case.
Yeah, LLMs are interesting tech products to play with and find some niche uses for.
But for the love of god they are not “prop up the entire stock market and numerous multi-trillion-dollar companies indefinitely” good!
What is it useful for? I actually have a hard time finding a use for it… It's alright at book recommendations, sometimes.
I found it’s useful for code where I know like 70% of what I’m doing. More than that and I can just do it myself. Less than that and I can’t trust or diagnose the output.
I’d rather have old fashioned stack overflow and tutorials, honestly. It’s hard to actually learn when it just gives answers.
I use it for coding advice sometimes, as an amateur hobbyist it’s really useful to point me in the right direction when facing problems I’m unfamiliar with. I often end up reinventing the proverbial wheel, just worse, but LLMs can help point out standards and best practices that I, as an outsider to the industry, am unaware of.
You have to be careful at low skill/knowledge levels, because it’ll happily send you down a crazy path that looks legitimate.
I asked it how to do something in Oracle SQL, because I don’t know Oracle specifically, and it gave me a terrible answer. I suspected it wasn’t right, so I asked a coworker who’s an old hand at Oracle, and he was like, “No, that’s terrible. Here’s a much simpler way.”
I find it’s good at writing boilerplate and scaffolding code, the stuff I really hate doing.
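For a concrete sense of the kind of boilerplate being talked about, here’s a minimal, made-up sketch: a plain argparse CLI skeleton in Python. Nothing in it comes from anyone’s actual project and every name is hypothetical; it’s just the flavour of repetitive setup code that’s tedious to type and easy to delegate.

```python
# A made-up example of "boilerplate and scaffolding": an argparse CLI
# skeleton. All names here are hypothetical; only the shape matters.
import argparse
import sys


def main(argv=None):
    parser = argparse.ArgumentParser(description="Example tool skeleton")
    parser.add_argument("input", help="path to the input file")
    parser.add_argument("-v", "--verbose", action="store_true",
                        help="print progress information")
    args = parser.parse_args(argv)

    if args.verbose:
        print(f"processing {args.input}", file=sys.stderr)
    # ...the part actually worth writing by hand goes here...
    return 0


if __name__ == "__main__":
    sys.exit(main())
```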
Movie recommendations are my biggest thing, personally.
And lots of other purposes. Just because a ton of people are misusing this tool and treating it like AGI doesn’t mean it isn’t a useful tool. Even something as simple as proofreading a letter has massive utility for some people.
Definitely proofreading. Especially for people who can barely write intelligibly. They can check for themselves whether the meaning is still correct, and they’ll learn grammar from the process.
I just got the notification today when opening Office programs that Copilot was there.
All the help threads about how to turn it off have out-of-date info. It seems like you can no longer disable it in Excel/Word/PowerPoint.
You can disable it with the uninstall function.
Microsoft Works 2000 still works fine.
This comment is fantastically chaotic and I love it so much.
RIP Microsoft Works, what a legend.
The disabling process is kinda convoluted.
- Delete Word
- Install libre office
- ???
- Profit!
This is one reason I’m so glad we devs can install Linux at work. I have LibreOffice installed, sure, but if I need to use the Microsoft Office suite for some reason, it all works great as web pages in LibreWolf!
It still works for me, at least? In the Office options there’s a Copilot section with a single “Enable Copilot” checkbox. You’ll need to disable it per app, though.
Yeah, that’s the checkbox that doesn’t exist for me, in the Copilot section that also doesn’t exist for me.
The crazy thing is, none of these articles seem to want to admit that AI is bad. They keep making articles like this, keep saying that approval is falling among the general populace. But when touching on why that is, there are always wiggle words. Always some spin.
It’s never “people being forced to use it see it as a detriment,” or “people using it are seeing the efficacy of the results drop relative to the amount of prompting required,” or “people don’t like it because it’s going to have significant detrimental effects on the environment and their utility bills.”
All of those are solid reasons for the decline in both the use of LLMs and the approval of them.
The cost of goods and services relating even tangentially to AI is going through the roof. The amount of slop is increasing at a furious pace, directly contributing to things like enshittification and dead Internet theory. The effect on the economy is shaping up to be catastrophic.
But oh no. It’s lack of authenticity on social media spaces that people are worried about. Sure.
> The crazy thing is, none of these articles seem to want to admit that AI is bad.
As the old quote goes: “A time is coming when men will go mad, and when they see someone who is not mad, they will attack him saying, ‘You are mad, you are not like us.’”
In such an environment, nobody wants to admit they are not mad, lest they be attacked.
Or as someone else said: I want a future where machines cook and clean and do menial work, so we humans can focus on art and poetry and writing. Instead we have a world where machines create art and poetry and books, so humans can focus on cleaning and menial work. I don’t like this timeline.
Also, “AI could be used to replace my job. Not that it’d do a good job at it, but it’d be a great excuse to lay me off.”
Yeah. I often forget this one because AI isn’t replacing my job any time soon. At best, maybe it could be used to streamline some processes to do with tech data and workflow management (what tests and protocols get done when, and combining tests/troubleshooting steps to prevent rework). But that would have to be a very targeted and very, very regulated and tested thing before it could be viable.
It’s almost like it needs a dedicated person to hold its hand as it does your job. I wonder who would be well suited for that task.
Another AI?
AI is the Donald tRump of business!
The orgs publishing this junk are pushing the writers to use AI. So the writers and editors can’t shit talk AI because their boss will get upset.
It likely doesn’t help that the kids use “AI” as slang for “bullshit”.
I have so much admiration for the younger generation for this. Language is powerful and they know it.
Tim Walz knew it, too, with weird. Then the DNC told them to stop saying it to try and court Republicans. I’m so over winning, thanks DNC!
> Three years ago, as OpenAI’s ChatGPT was making its splashy debut, a Pew Research Center survey found that nearly one in five Americans saw AI as a benefit rather than a threat. But by 2025, 43 percent of U.S. adults now believe AI is more likely to harm them than help them in the future, according to Pew.
1 in 5 people seeing something as positive is not a high approval rating in the beginning.
I mean 4 out of 5 Americans probably held the opinion:

If you begin a large change-management project in a company, having 20% of the employees think it’s positive before you’ve hardly started is like starting halfway to the finish line.
Except you tell them the project will likely make most of their jobs redundant, and you’re (still somehow) surprised that the majority grow to hate your project, and will actively sabotage it if they get the chance.
Reminds me of the Luddites.
Well yes, technology improvements that mean humans can work less are only a good thing if you have an economic system that actually prioritises general wellbeing over enriching a tiny percentage of the population.
Americans are the most fucked, because the majority of the public view socialism and adjacent philosophies as bad, despite those really being the only ideologies with any real answer for what happens to people who can’t work for a living, other than them just dying.
Tbh, 20% is markedly better than the default ~37% “shitgoblin vote” you see in the US, amongst other places
> What began in 2022 as broad optimism about the power of generative AI to make people’s lives easier has instead shifted toward a sense of deep cynicism that the technology being heralded as a game changer is, in fact, only changing the game for the richest technologists in Silicon Valley who are benefiting from what appears to be an almost endless supply of money to build their various AI projects — many of which don’t appear to solve any actual problems.
> many of which don’t appear to solve any actual problems.
That’s putting it lightly. If only the issue was merely not having sufficient use cases, rather than actively making lives worse through environmental strains, supply chain hoarding, and misinformation.
What do you mean?? Elon has solved the critical problem of there not being a vaguely hot anime character in Grok that users could talk to!
Elon would be super pissed to hear you talk about his gf like that!
Gosh. Who could have foreseen this?
Sorry, what part of “Let the Broligarchy do anything it wants!” didn’t y’all understand? /s
I like AI. It lets me pretend I’m not so alone. I’m not crazy - I know it isn’t a person. It’s nice to pretend I have someone to talk to sometimes though. I can’t stop it from existing. That doesn’t negate the cost of it financially or environmentally. If it’s going to exist though, I enjoy pretending, since there’s no hope of any actual humans wanting to speak to me.
That last sentence of yours is a lie and I think somewhere deep down you know that, too.
Seek help any way you can. There is always a way.
In the most positive way - seek help.
Because AI in this case is the opposite of help, in case Illecors wasn’t clear enough.
Bro, that is fucking sad and also wrong.
Do not ever think so harshly of yourself.
I’ll upvote, seems genuine and I don’t want to discourage that. I understand it’s difficult to find someone to talk to who wants to hear what you have to say. Even I don’t share things with my friends cuz I feel like I’m “dumping on them”. I saw a therapist twice a month and that was nice, but it was just too expensive to maintain.
Your last sentence is what stood out to me. It’s such a harsh finality but there’s no possible way you could have tried talking to every human to KNOW none of them want to hear you talk. I know it’s extreme. I know it feels true right now. But there is no substitute for genuine human connection. Use the AI, just be careful you don’t become dependent on it. And don’t think it can be a long-term substitute.
Have you tried a therapist? They will both talk to you and likely help you with your feeling that actual humans don’t want to talk to you.