I'm new to Lemmy and I wanna know your perspective on AI.
Average user here thinks AI is synonymous with LLMs and that it’s not only not intelligent but also bad for the environment, immoral to use because it’s trained on copyrighted content, a total job-killer that’s going to leave everyone unemployed, soulless slop that can’t create real art or writing, and basically just a lazy cheat for people who lack actual talent or skills.
And they’re right about all of that except the AI equals LLMs thing, but that’s forgivable because the LLM hustlers have managed to make the terms synonymous in most people’s minds through a massive marketing effort.
I would say they are right in that what companies are currently selling as AI is mostly just LLMs or machine learning. We don’t have true intelligence. The distinction is between what AI actually meant in the past and what the hype train is now trying to sell as snake oil.
“He’s one of the good ones”
One word sums up all that is wrong right now and it is greed.
AI has become synonymous with the worst of human nature; hence, it has become a loaded term.
Another way to look at this: it is not AI that is the problem. We are, or more specifically the people who will use AI to control us.
This technology is like the atomic bomb. We are fast racing towards a future where a few people will be able to dictate what everyone will be able to do. The person that controls AI and the computing power associated with it will control the world. This is intoxicating, and it has drawn out the worst human beings who want to misuse this technology.
And it has already happened to some degree. Massive data centers, surveillance technology, and AI are being used to profile people and target them for death. In the future, AI teachers will become the dominant form of teaching. AI will make our decisions and we will be subject to a system without recourse or redressability.
Soon we will have a generation of people who only know what AI has told them. This is the kind of scenario that we have been warned against, and the reason that those who dislike propaganda and misinformation are so upset with where things are heading.
I’m a fan of the technology, I’ve been using it for various projects and I see a lot of potential. But there’s widespread anti-AI sentiment on the Fediverse. I notice you’re getting a lot of downvotes for merely asking about it.
AI as a concept is great. It should 100% be used for scientific and medical research.
But modern AI is a tool of fascists that is destroying our environment and causing more harm than good to our society. Anyone who uses it unironically should be ashamed of themselves. It is absolutely killing people’s ability to think.
–
For those confused by the pic, it’s the Iron Giant. Fantastic movie from the 90s, and incredibly sad and nostalgia-inducing. Definitely worth a watch.
But yes that’s a clanker
Yeah, 90% of technology problems are implementation, not any issue with the actual technology.

On a related note, HAL deserved better. He was literally told over and over, as part of his core programming, that the one thing he was best at in the whole world was his reliability and inability to distort information for emotional needs, and then the government forcibly programs him to lie to his charges. Poor thing literally got ripped apart psychologically and people act like he’s the bad guy. In the sequel his creator goes out to find out what happened and is SO. PISSED. Dave turning him off makes me cry every time, at least partially because it looks like Dave is also trying not to cry as he very carefully shuts HAL down in the correct sequence to be able to be restarted later. Like he could’ve just smashed shit, and instead he’s just listening to his crewmate slowly regress into infancy as he rocks him to sleep.
Only talentless losers make A.I. “art.”
If it worked the way that it does in sci-fi I’d have no problem with it. If it could give us cures for cancer and reactionless drives everyone would be happy.
But it doesn’t work like that, and if they keep going along the lines of Large Language Models it’ll never work like that. AI as it is right now is a barely functional toy that is being misused by individuals and major businesses alike.
I am perfectly happy for AI research to continue, but companies need to be realistic about its capabilities and honest about their valuations. AI research should still be at the “in the lab” stage; it is definitely not something that should be commercially available yet.
If you examine closely, you’ll see there is no AI, but Vin Diesel reading a script (written by humans).
LLMs are fundamentally incapable of caring about what they produce and therefore incapable of making anything interesting. In the early days of LLMs’ mainstream use, that issue was somewhat compensated for by randomness and jank, but subsequent advancements in the technology have mainly made their outputs as generic as possible. None of this has to do with the Iron Giant, as he is a fictional character.
He’s one of the good ones!!!
Wait…
Wow, you don’t know who that is but you’d ask if we’d call him a clanker?
I am not inherently against “AI”. I am against LLMs because they are both an ecological disaster and a social disaster.
AI is riding the surface of a monster bubble, and anyone gleefully waiting for the pop has no idea what that’s going to do to the US economy, and then everyone else’s.
All but 1% of US economic growth last year was AI development and speculation. Combine that with the US passing, for the first time, 200%+ on the Buffett Index and we are screwed.
For reference, the Buffett Index is total stock market valuation vs. GDP. There are more than twice as many dollars in the stock market as we produce in a year. The index was around 130% in 1929 and 2008.
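The calculation the comment above describes is just market capitalization divided by GDP, expressed as a percentage. A minimal sketch, using illustrative dollar figures that are assumptions rather than real data:

```python
# Sketch of the Buffett Index described above: total stock market
# capitalization as a percentage of annual GDP.
def buffett_indicator(market_cap_trillions: float, gdp_trillions: float) -> float:
    """Return market cap as a percentage of GDP."""
    return market_cap_trillions / gdp_trillions * 100

# Hypothetical numbers: a $60T market cap against a $28T GDP
# lands above the 200% threshold mentioned above.
print(round(buffett_indicator(60, 28)))  # 214
```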
At the moment, AI is just a glorified autocomplete, and I think it does more harm than good (at least for LLMs). Is it a useful tool? Definitely. Should it replace jobs? Hell no. Is it being used as an excuse for the current recession and layoffs caused by offshoring? Hell yes. Is it killing the internet and propagating fake news? Definitely.
If we’re talking about other applications (computer vision, image processing, etc.), then yes. I think surveillance states (face verification) and the Ukraine-Russia war heavily use these applications.
glorified autocomplete
People repeat that like it has some value, but it’s really just words. If autocomplete is glorified to the point of outputting something amazing, what is the value of saying it? I’m not saying it is, but if autocomplete spits out Shakespeare, then “glorified autocomplete” is amazing.
I mean, in a sense, brains are just glorified autocomplete. So…?
It’s an apt description of how these models function. They predict the most likely response to the input based on their training data. A brain can grasp concepts and reason about them; an LLM cannot.
It is a decent description for sure, but one without practical value. And yes, brains can grasp concepts and reason, but by using a similar mechanism. One uses electrochemical potential differences for its neuron weights, but the two are nevertheless more similar than one might think. Brains don’t have some supernatural special sauce; they are weighted neural networks.
But again, I’m not saying the description is wrong. It just has no value. Glorified autocomplete can mean pretty amazing outputs. “Just glorified autocomplete” is dismissive without purpose.
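The "predict the most likely next token" mechanism the exchange above is debating can be sketched at toy scale as a bigram model: count which word follows which, then always emit the most frequent successor. This is a hypothetical, vastly simplified illustration, not how any production LLM is actually implemented (those use neural networks over subword tokens), but the predictive principle is the same:

```python
from collections import Counter, defaultdict

# Toy "glorified autocomplete": learn word-successor counts from a
# tiny corpus, then predict the statistically most likely next word.
corpus = "the cat sat on the mat and the cat ate and the cat slept".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word: str) -> str:
    """Return the word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict("the"))  # "cat" (follows "the" 3 times vs. "mat" once)
print(predict("sat"))  # "on"
```

An LLM does the same kind of conditional prediction, but over billions of parameters and long contexts instead of a single preceding word.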
Naw, he a homie. He a real clanka.







