For me, this is the most depressing part.
So many of the technically competent individuals I know are just gleefully throwing their competency and credibility into this ‘AI’ grift chipper with utter abandon.
I can’t see a viable path through it, and whenever they’ve articulated what they see on the other side, it is beyond repugnant and I truly don’t see any benefit in existing in that world if it ever manifests.
We’re cooked. Gotta fight back
That’s actually a pretty good idea. What if we started putting tech billionaires through the wood chipper? It could be like the American guillotine
There’s is nothing too awful to happen to sundar Prichai.
No horrible fate could befall that man that would not cause me delight.
Do they make a 2-stroke wood chipper? The planet can dank the damage and it’ll get the chuds on board /s
What do you call a private jet full of billionaires crashing into a mountain?
A good start.
Not if the people trying to force it on us go in to the chipper first.
I think we can all agree, we should put people like this in the wood chipper and unboubtedly the world would be a better place.
The billionaires want to build industrial wood-chippers to get rid of us en masse, meanwhile individual sized wood-chippers already exist.
Bingo
https://www.machinerypartner.com/heavy-equipment-categories/shredders/lt/car-shredder-for-sale
Sounds like on of these.
I’ve got about $100 I can afford to fund this endeavor.
I don’t see a stat for how many billionaires it can shred per minute. Any idea?
Agree. AGREE?!
Well, firstly, I want to know are you a namby pamby “make it quick, put them head first in to the woodchipper” person.
Or a steely eyed “make the parasites suffer and go feet first and make them watch it happen” type.
If we are going to agree on anything then we need to agree on these fundamentals!!I am for head first.
There is no benefit for other’s suffering. Just make it quick.That’s one thing I really liked about the first John Wick movie. No drawn out revenge torture, just a quick bullet and walk out. No need to take some kind of sick pleasure in it, there’s a job to be done.
Now that’s not to say I won’t enjoy some feel-good thoughts about the whole thing, but I just don’t take pleasure in making a death slow and painful.
Dick first
Head first, but the line up to the top has a really good view.
As long as the tech Bros and the CEOs suffer first and the most I’m okay with it.
This type of stuff is exactly why I am moving all of my accounts away from Google. Google is now as bad as, or even worse than, Microsoft.
To where?
For me - self-hosted.
Where is Alphabet at on creating robot infantry? Not sure if ads are going to save them from societal woodchipering:
Google sold Boston Dynamics to Hyundai a few years ago. I wonder if they regret it…
I’ll suffer if he goes through the woodchipper
2 steps ahead of you, boss. Locked and loaded ready for the Terminators already.
I’d love for him to have to suffer through… everything, really, if I’m being honest, given what he’s helping to do to humanity.
I’m just giddy with excitement. /s
Nope. Not at all
deleted by creator
They don’t have the capability to “admit” to anything.
You are falling into the same trap as the guy who had his development project deleted by an AI despite having made it “promise” not to do that.
The AIs we use today have no understanding of “admitting” or “promising”; to them, these are just words with no underlying concept.
Please stop treating AIs as if they were human. They are absolutely not.
It’s the same trap that execs fall into when thinking they can replace humans with AI
Gen AI doesn’t “think” for itself. All the models do is answer “given the text X in front of me, what’s the most probable response to spit out?” They have no concept of memory or anything like that. Even chat convos are a bit of a hack: all that happens is that all the text in the convo up to that point is thrown in as X (rough sketch below). It’s why context-window limits exist in LLMs; there’s a limit to how much text you can throw in as X before the model shits itself.
That doesn’t mean they can’t be useful. They are semi-decent at taking human input and translating it into programmatic calls (something you previously had to use clunky NLP libraries for). They are also okay at summarizing info.
But the chatbots and the garbage hype around them have people convinced that these are things they’re not. And every company is starting to learn the hard way that there are hard limits to what these things can do.
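To make that concrete, here’s a minimal sketch of how the “memory” in a chat is faked. The model itself is stateless; everything here (the `generate` callable, the crude token counting, the 4096 budget) is made up for illustration and isn’t any real API.

```python
MAX_TOKENS = 4096  # stand-in for a model's context-window limit

def rough_token_count(text: str) -> int:
    # Crude approximation of a real tokenizer.
    return len(text.split())

def chat_turn(history: list[str], user_message: str, generate) -> str:
    # `generate` is any text-completion function; the model has no state of its own.
    history.append(f"User: {user_message}")
    # Once the transcript no longer fits the window, the oldest turns get dropped --
    # which is why long conversations "forget" their beginnings.
    while rough_token_count("\n".join(history)) > MAX_TOKENS and len(history) > 1:
        history.pop(0)
    prompt = "\n".join(history) + "\nAssistant:"
    reply = generate(prompt)  # the model only ever sees this flat text: the X above
    history.append(f"Assistant: {reply}")
    return reply
```

Every turn, the entire transcript goes back in as X; there is no memory beyond whatever text still fits in the prompt.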
People really need to understand that it’s just very complex predictive text amounting to a Rorschach test.
I know this comes from a good place, but you are misunderstanding how LLMs work at a fundamental level. The LLMs “admitted” to those things in the same way that parrots speak English. LLMs aren’t self-aware and do not understand their own implementation or purpose. They just spit out a statistically reasonable series of words from their dataset. You could just as easily get LLMs to admit they are an alien, the flying spaghetti monster, or the second coming of Jesus.
Realistically, engaging with these LLMs directly in any way is not really a good idea. It wastes resources, shows engagement with the app, and gives it more training data.
It sounds like you’ve fallen for the marketing, and believe the chatbots are alive. Chatbots are not alive. They don’t “confess” or “admit” or “lie” or “hide”. It’s a text generator. Please spend more time with your friends and stop interacting with the chatbots.
deleted by creator
Calm down; now you are talking to yourself.