- cross-posted to: technology@beehaw.org
cross-posted from: https://lemmy.dbzer0.com/post/43566349
deleted by creator
Humans are irrational creatures that have transitory states where they are capable of more ordered thought. It is our mistake to reach a conclusion that humans are rational actors while we marvel daily at the irrationality of others and remain blind to our own.
Precisely. We like to think of ourselves as rational but we’re the opposite. Then we rationalize things afterwards. Even being keenly aware of this doesn’t stop it in the slightest.
Probably because stopping to self-analyze your decisions is a lot less effective than just running away from that lion over there.
Bottom line: Lunatics gonna be lunatics, with AI or not.
Yep.
And after enough people can no longer actually critically think, well, now this shitty AI tech does actually win the Turing Test more broadly.
Why try to clear the bar when you can just lower it instead?
… Is it fair, at this point, to legitimately refer to humans that are massively dependent on AI for basic things… can we just call them NPCs?
I am still amazed that no one knows how to get anywhere around… you know, the town or city they grew up in? Nobody can navigate without some kind of map app anymore.
deleted by creator
Dehumanization is happening often and fast enough without acting like ignorant, uneducated, and/or stupid people aren’t “real” people.
I get it, some people seem to live their whole lives on autopilot, just believing whatever the people around them believe and doing what they’re told, but that doesn’t make them any less human than anybody else.
Don’t let the fascists win by pretending they’re not people.
deleted by creator
“Unalive” is an unnecessary euphemism here. Please just say kill.
deleted by creator
Haha, I grew up before smartphones and GPS navigation were a thing, and I never could navigate well even with a map!
GPS has actually been a godsend for me to learn to navigate my own city way better, because I learn better routes on the first try. Navigating is probably my weakest “skill” and is the joke of the family. If I have to go somewhere and it’s 30km, the joke is it’s 60km for me, because I always take “the long route”.
But with GPS I’ve actually become better at it, even without using the GPS.
I don’t know if it’s necessarily a problem with AI, more of a problem with humans in general.
Hearing ONLY validation and encouragement without pushback, regardless of how stupid a person’s thinking might be, is most likely what creates these issues, in my very uneducated mind. It forms a toxically positive echo chamber.
The same way hearing ONLY criticism and expecting perfection 100% of the time regardless of a person’s capabilities or interests created depression, anxiety, and suicidal ideation and attempts specifically for me. But I’m learning I’m not the only one with these experiences and the one thing in common is zero validation from caregivers.
I’d be ok with AI if it could be balanced and actually push back on batshit-crazy thinking instead of encouraging it, while also being able to validate common sense and critical thinking. Right now it’s just completely toxic for lonely humans to interact with, based on my personal experience. If I wasn’t in recovery, I would have believed that AI was all I needed to make my life better, because I was (and still am) in a very messed-up state of mind from my caregivers, trauma, and addiction.
I’m in my 40s, so I can’t imagine younger generations being able to pull away from using it constantly if they’re constantly being validated while at the same time enduring generational trauma at the very least from their caregivers.
deleted by creator
TBF, that should be the conclusion in all contexts where “AI” is concerned.
I read the article. This is exactly what happened when my best friend got schizophrenia. I think the people affected by this were probably already prone to psychosis/on the verge of becoming schizophrenic, and that ChatGPT is merely the mechanism by which their psychosis manifested. If AI didn’t exist, it would’ve probably been Astrology or Conspiracy Theories or QAnon or whatever that ended up triggering this within people who were already prone to psychosis. But the problem with ChatGPT in particular is that it validates the psychosis… that is very bad.
ChatGPT actively screwing with mentally ill people is a huge problem you can’t just blame on stupidity like some people in these comments are. This is exploitation of a vulnerable group of people whose brains lack the mechanisms to defend against this stuff. They can’t help it. That’s what psychosis is. This is awful.
the problem with ChatGPT in particular is that it validates the psychosis… that is very bad.
So do astrology and conspiracy theory groups on forums and other forms of social media, the main difference is whether you’re getting that validation from humans or a machine. To me, that’s a pretty unhelpful distinction, and we attack both problems the same way: early detection and treatment.
Maybe computers can help with the early detection part. They certainly can’t do much worse than what’s currently happening.
I think having that kind of validation at your fingertips, whenever you want, is worse. At least people, even people deep in the claws of a conspiracy, can disagree with each other. At least they know what they are saying. The AI always says what the user wants to hear and expects to hear. Though I can see how that distinction may matter little to some, I just think ChatGPT has advantages that make it worse than what a forum could do.
Sure. But on the flip side, you can ask it the opposite question (tell me the issues with <belief>) and it’ll do that as well, and you’re not going to get that from a conspiracy theory forum.
I don’t have personal experience with people suffering psychoses, but I would think that, if you have the wherewithal to ask questions about the opposite beliefs, you’d be noticeably less likely to get suckered into scams and conspiracies.
Sure, but at least the option is there.
I think this is largely people seeking confirmation their delusions are real, and wherever they find it is what they’re going to attach to themselves.
If AI didn’t exist, it would’ve probably been Astrology or Conspiracy Theories or QAnon or whatever that ended up triggering this within people who were already prone to psychosis.
Or hearing the Beatles’ White Album and believing it tells you that a race war is coming and you should work to spark it off, then hide in the desert for a time, only to return at the right moment to save the day and take over LA. That one caused several murders.
But the problem with ChatGPT in particular is that it validates the psychosis… that is very bad.
If you’re sufficiently detached from reality, nearly anything validates the psychosis.
Sounds like a lot of these people either have an undiagnosed mental illness or they are really, reeeeaaaaalllyy gullible.
For shit’s sake, it’s a computer. No matter how sentient the glorified chatbot being sold as “AI” appears to be, it’s essentially a bunch of rocks that humans figured out how to jet electricity through in such a way that it can do math. Impressive? I mean, yeah. It is. But it’s not a human, much less a living being of any kind. You cannot have a relationship with it beyond that of a user.
If a computer starts talking to you as though you’re some sort of God incarnate, you should probably take that with a dump truck full of salt rather than just letting your crazy latch on to that fantasy and run wild.
deleted by creator
So it’s essentially the same mechanism with which conspiracy nuts embolden each other, to the point that they completely disconnect from reality?
deleted by creator
The time will come when we look back fondly on “organic” conspiracy nuts.
Human-level? Have these people used ChatGPT?
deleted by creator
Or immediately question what it/its author(s) stand to gain from making you think it thinks so, at a bear minimum.
I dunno who needs to hear this, but just in case: THE STRIPPER (OR AI I GUESS) DOESN’T REALLY LOVE YOU! THAT’S WHY YOU HAVE TO PAY FOR THEM TO SPEND TIME WITH YOU!
I know it’s not the perfect analogy, but… eh, close enough, right?
a bear minimum.
I always felt that was too much of a burden to put on people, carrying multiple bears everywhere they go to meet bear minimums.
/facepalm
The worst part is I know I looked at that earlier and was just like, “yup, no problems here” and just went along with my day, like I’m in the Trump administration or something
I chuckled… it happens! And it blessed us with this funny exchange.
For real. I explicitly append “give me the actual objective truth, regardless of how you think it will make me feel” to my prompts, and it still tries to somehow butter me up, like I’m some kind of genius for asking those particular questions or whatnot. Luckily I’ve never suffered from good self-esteem in my entire life, so those tricks don’t work on me :p
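In case anyone wants to try the same trick programmatically, here’s a minimal sketch of the idea, assuming the OpenAI Python client; the model name and the exact wording of the instruction are just illustrative, not anything official:

```python
# Minimal sketch: pin an anti-flattery instruction to every request.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

ANTI_FLATTERY = (
    "Give me the actual objective truth, regardless of how you think "
    "it will make me feel. Do not compliment me or my questions."
)

def ask(question: str) -> str:
    # Send the instruction as a system message so it applies to every turn.
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any chat model would do here
        messages=[
            {"role": "system", "content": ANTI_FLATTERY},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Be honest: is my business plan any good?"))
```

Even as a pinned system message it only blunts the flattery rather than eliminating it, which matches my experience.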
How do we know you’re not an AI bot?
The article talks of ChatGPT “inducing” this psychotic/schizoid behavior.
ChatGPT can’t do any such thing. It can’t change your personality organization. Those people were already there, at risk, masking high enough to get by until they could find their personal Messiahs.
It’s very clear to me that LLM training needs to include protections against getting dragged into a paranoid/delusional fantasy world. People who are significantly on that spectrum (as well as borderline personality organization) are routinely left behind in many ways.
This is just another area where society is not designed to properly account for or serve people with “cluster” disorders.
I mean, I think ChatGPT can “induce” such schizoid behavior in the same way a strobe light can “induce” seizures. Neither machine is twirling its mustache while hatching its dastardly plan; they’re dead machines that produce stimuli that aren’t healthy for certain people.
Thinking back to college psychology class and reading about horrendously unethical studies that definitely wouldn’t fly today… well, here’s one: let’s issue every anglophone a sniveling yes-man and see what happens.
I think OpenAI’s recent sycophancy issue has caused a new spike in these stories. One thing I noticed was these models running on my PC saying it’s rare for a person to think and do the things that I do.
The problem is that this is a model running on my GPU. It has never talked to another person. I hate insincere compliments, let alone overt flattery, so I was annoyed, but it did make me think that this kind of talk would be crack for a conspiracy nut or for mentally unwell people. It’s a whole risk area I hadn’t been aware of.
Humans are always looking for a god in a machine, in a bush, in a cave, in the sky, in a tree… the ability to rationalize and see through difficult-to-explain situations has never been a human strong point.
I’ve found god in many a bush.
the ability to rationalize and see through difficult-to-explain situations has never been a human strong point.
you may be misusing the word, rationalizing is the problem here
saying it’s rare for a person to think and do the things that I do.
Probably one of the most common kinds of flattery I see. I’ve tried lots of models, on-device and larger cloud ones. It happens during normal conversation, technical conversation, roleplay, general testing… you name it.
Though it makes me think… these models are trained on internet text and whatever, none of which really shows that most people think quite a lot privately and only say so when they feel like they can talk.
This happened to a close friend of mine. He was already on the edge, with some weird opinions and beliefs… but he was talking with real people who could push back.
When he switched to spending basically every waking moment with an AI that could reinforce and iterate on his bizarre beliefs 24/7, he went completely off the deep end, fast and hard. We even had him briefly hospitalized and they shrugged, basically saying “nothing chemically wrong here, dude’s just weird.”
He and his chatbot are building a whole parallel universe, and we can’t get reality inside it.
This seems like an extension of social media and the internet. Weird people who talked at the bar or on the street corner were not taken seriously and didn’t get followers and lots of people who agreed with them. They were isolated in their thoughts. Then social media made that possible with little work: these people became a group and could reinforce their beliefs. Now these chatbots and stuff let them live in a fantasy world.
I think that people give shows like The Walking Dead too much shit for having dumb characters when people in real life are far stupider.
Like farmers who refuse to let the government plant shelter belts to preserve our topsoil, all because they don’t want to take a 5% hit on their yields… So instead we’re going to deplete our topsoil in 50 years, and future generations will be completely fucked because creating 1 inch of topsoil takes 500 years.
Even if the soil is preserved, we’ve been mining the micronutrients from it and generally only replacing the 3 main macros for centuries. It’s one of the reasons why mass-produced produce doesn’t taste as good as home-grown or wild food. Nutritional value keeps going down, because each time food is harvested, shipped away to be consumed, and then shat out into a septic tank or waste processing facility, it doesn’t end up back in the soil as part of nutrient cycles like it did when everything was wilder. It’s a similar story for meat: the animals eat the nutrients in a pasture, and those nutrients are shipped away with them.
Insects did contribute to the cycle, since they still shit and die everywhere, but their numbers are dropping rapidly, too.
At some point, I think we’re going to have to mine the sea floor for nutrients and ship that to farms for any food to be more nutritious than junk food. Salmon farms set up in ways that block wild salmon from making it back inland don’t help balance out all of the nutrients that get washed out to sea all the time, either.
It’s like humanity is specifically trying to speedrun extinction by ignoring and taking for granted how the things we depend on work.
Why would good nutrients end up in poop?
It makes sense that growing a whole plant takes a lot of different things from the soil, and that coating the area with a basic fertilizer that may or may not get washed away with the next rain doesn’t replenish all of what is taken.
But how would adding human poop to the soil help replenish things that humans need out of food?
We don’t absorb everything completely, so some passes through unabsorbed. Some nutrients are passed via bile or mucus production, like manganese, copper, and zinc. Others are passed via urine. Some are passed via sweat. Selenium, if you’re experiencing selenium toxicity, will even pass out through your breath.
Other than the last one, most of those eventually end up going down the drain, either in the toilet, down the shower drain, or when we do our laundry. Though some portion ends up as dust.
And to be thorough, there’s also bleeding as a pathway to losing nutrients, as well as injuries (or surgeries) involving losing flesh, tears, spit/boogers, hair loss, lactation, fingernail and skin loss, reproductive fluids, blistering, and menstruation. And corpse disposal, though the amount of nutrients we shed throughout our lives dwarfs what’s left at the end.
I think each one of those is a pathway where, due to our way of life and how it’s changed since our hunter-gatherer days, less of it ends up back in the nutrient cycle.
But I was mistaken to put the emphasis on shit and it was an interesting dive to understand that better. Thanks for challenging that :)
Thank you for taking it in good faith and for writing up a researched response, bravo to you!
Covid gave me an extremely different perspective on the zombie apocalypse. They’re going to have zombie immunization parties where everyone gets the virus.
People will protest shooting the zombies as well
In that sense, Westgate explains, the bot dialogues are not unlike talk therapy, “which we know to be quite effective at helping people reframe their stories.” Critically, though, AI, “unlike a therapist, does not have the person’s best interests in mind, or a moral grounding or compass in what a ‘good story’ looks like,” she says. “A good therapist would not encourage a client to make sense of difficulties in their life by encouraging them to believe they have supernatural powers. Instead, they try to steer clients away from unhealthy narratives, and toward healthier ones. ChatGPT has no such constraints or concerns.”
This is a rather terrifying take. Particularly when combined with the earlier passage about the man who claimed that “AI helped him recover a repressed memory of a babysitter trying to drown him as a toddler.” Therapists have to be very careful because human memory is very plastic. It’s very easy to alter a memory, in fact, every time you remember something, you alter it just a little bit. Under questioning by an authority figure, such as a therapist or a policeman if you were a witness to a crime, these alterations can be dramatic. This was a really big problem in the '80s and '90s.
Kaitlin Luna: Can you take us back to the early 1990s and you talk about the memory wars, so what was that time like and what was happening?
Elizabeth Loftus: Oh gee, well in the 1990s and even in maybe the late 80s we began to see an altogether more extreme kind of memory problem. Some patients were going into therapy maybe they had anxiety, or maybe they had an eating disorder, maybe they were depressed, and they would end up with a therapist who said something like well many people I’ve seen with your symptoms were sexually abused as a child. And they would begin these activities that would lead these patients to start to think they remembered years of brutalization that they had allegedly banished into the unconscious until this therapy made them aware of it. And in many instances these people sued their parents or got their former neighbors or doctors or teachers whatever prosecuted based on these claims of repressed memory. So the wars were really about whether people can take years of brutalization, banish it into the unconscious, be completely unaware that these things happen and then reliably recover all this information later, and that was what was so controversial and disputed.
Kaitlin Luna: And your work essentially refuted that, that it’s not necessarily possible, or maybe brought to light that this isn’t so.
Elizabeth Loftus: My work actually provided an alternative explanation. Where could these memory reports be coming from if this didn’t happen? So my work showed that you could plant very rich, detailed false memories in the minds of people. It didn’t mean that repressed memories did not exist, and repressed memories could still exist and false memories could still exist. But there really wasn’t any strong credible scientific support for this idea of massive repression, and yet so many families were destroyed by this, what I would say unsupported, claim.
The idea that chatbots are not only capable of this, but that they are currently manipulating people into believing they have recovered repressed memories of brutalization, is actually at least as terrifying to me as their convincing people that they are holy prophets.
Edited for clarity
GPT-4o was a little too supportive… I think they took it down already.
Yikes!
4o, in its current version, is a fucking sycophant. For me, it’s annoying. For the person in that screenshot, it’s dangerous.
JFC.
Meanwhile, for centuries we’ve had religion, but that’s a fine delusion for people to have, according to the majority of the population.
Came here to find this. It’s the definition of religion. Nothing new here.
Right, it immediately made me think of TempleOS. Where were the articles back then claiming people were losing loved ones to programming-fueled spiritual fantasies?
Cult. Religion. What’s the difference?
Is the leader alive or not? Alive is likely a cult, dead is usually religion.
The next question is how isolated from friends and family or society at large are the members. More isolated is more likely to be a cult.
Other than that, there’s not much difference.
The usual setup is that a cult is formed, and then the second or third leader opens things up a bit and transitions it into just another religion… But sometimes a cult can be born from a religion, as a small group breaks off to follow a charismatic leader. If it helps, the rule of thumb fits in a few lines of code, as sketched below.
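A tongue-in-cheek sketch of the rule of thumb above; the factors and thresholds are made up for illustration, not social science:

```python
# Tongue-in-cheek sketch of the cult-vs-religion rule of thumb.
# isolation: 0 (members fully part of society) .. 10 (fully cut off).
def classify(leader_alive: bool, isolation: int) -> str:
    if leader_alive and isolation >= 5:
        return "almost certainly a cult"
    if leader_alive or isolation >= 7:
        return "likely a cult"
    return "usually just another religion"

print(classify(leader_alive=True, isolation=8))   # almost certainly a cult
print(classify(leader_alive=False, isolation=2))  # usually just another religion
```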
I have kind of arrived at the same conclusion. If people asked me what love is, I would say it is a religion.
The existence of religion in our society basically means that we can’t go anywhere but up with AI.
Just the fact that we still have outfits forced on people or putting hands on religious texts as some sort of indicator of truthfulness is so ridiculous that any alternative sounds less silly.
Not trying to speak like a prepper or anything, but this is real.
One of my neighbor’s children just committed suicide because their chatbot boyfriend said something negative. Another in my community did something similar a few years ago.
Something needs to be done.
Like what, some kind of parenting?
This happened less than a year ago, and I doubt regulators have done much since then: https://apnews.com/article/chatbot-ai-lawsuit-suicide-teen-artificial-intelligence-9d48adc572100822fdbc3c90d1456bd0
This is the Daenerys case; for some reason it seems to be suddenly making the rounds again. Most of the news articles I’ve seen about it leave out a bunch of significant details, so that it ends up sounding more like an “ooh, scary AI!” story (baits clicks better) rather than a “parents not paying attention to their disturbed kid’s cries for help and instead leaving loaded weapons lying around” story (as old as time, at least in America).
Not only in America.
I loved GOT, and I think Daenerys is a beautiful name, but still, there’s something about parents naming their kids after movie characters. In my youth, Kevins started to pop up everywhere (yep, that’s how old I am). They weren’t suicidal, but they behaved incredibly badly, so you could constantly hear their mothers screeching after them.
Daenerys was the chatbot, not the kid.
I wish I could remember who it was that said that kids’ names tend to reflect “the father’s family tree, or the mother’s taste in fiction,” though. (My parents were of the father’s-family-tree persuasion.)
Thanks for clarifying!
But Fuckerburg said we need AI friends.
I admit I only read a third of the article.
But IMO nothing in that is special to AI; in my life I’ve met many people with similar symptoms, thinking they are Jesus, or thinking computers work by some mysterious power they possess, but which was stolen from them by the CIA. And when they die all computers will stop working! Reading the conversation the wife had with him, it sounds EXACTLY like these types of people!
Even the part about finding “the truth” I’ve heard before; they don’t know what it’s the truth of, but they’ll know it when they find it?
I’m not a psychiatrist, but from what I gather it’s probably schizophrenia of some form. My guess is this person had a distorted view of reality he couldn’t make sense of. He then tried to get help from the AI, and with it he built a world view completely removed from reality.
But most likely he would have done that anyway, it would just have been other things he would interpret in extreme ways. Like news, or conversations, or merely his own thoughts.
“How shall we fuck off, O Lord?”
deleted by creator
If you find yourself in weird corners of the internet, schizo-posters and “spiritual” people generate staggering amounts of text.
They train it on basically the whole internet. They try to filter it a bit, but I guess not well enough. It’s not that they intentionally trained it on religious texts, just that they didn’t think to remove religious texts from the training data.
I lost a parent to a spiritual fantasy. She decided my sister wasn’t her child anymore because the christian sky fairy says queer people are evil.
At least ChatGPT actually exists.
Our species really isn’t smart enough to live, is it?
For some, yes, unfortunately, but we all choose our path.
Of course, that has always been true. What concerns me now is the proportion of useful to useless people. Most societies are - while cybernetically complex - rather resilient. Network effects and self-organization can route around and compensate for a lot of damage, but there comes a point where having a few brilliant minds in the midst of a bunch of atavistic confused panicking knuckle-draggers just isn’t going to be enough to avoid cascading failure. I’m seeing a lot of positive feedback loops emerging, and I don’t like it.
As they say about collapsing systems: First slowly, then suddenly very, very quickly.
The same argument was already made around 2500 BCE in Mesopotamian scriptures: the corruption of society will lead to deterioration and collapse, these processes accelerate and will soon lead to the inevitable end; the remaining minds write history books and capture the end of humanity.
…and as you can see, we’re 4500 years into this stuff, still kicking.
One mistake people of all generations make is assuming the previous ones were smarter and better. No, they weren’t; they were as naive, if not more so, and had the same illusions of grandeur and outside influences. This thing never went anywhere and never will. We can shift it for better or worse, but societal collapse due to people suddenly getting dumb is not something to reasonably worry about.
Almost certainly not, no. Evolution may work faster than once thought, but not that fast. The problem is that societal, and in particular, technological development is now vastly outstripping our ability to adapt. It’s not that people are getting dumber per se - it’s that they’re having to deal with vastly more stuff. All. The. Time. For example, consider the world as it was a scant century ago - virtually nothing in evolutionary terms. A person did not have to cope with what was going on on the other side of the planet, and probably wouldn’t even know for months if ever. Now? If an earthquake hits Paraguay, you’ll be aware in minutes.
And you’ll be expected to care.
Edit: Apologies. I wrote this comment as you were editing yours. It’s quite different now, but you know what you wrote previously, so I trust you’ll be able to interpret my response correctly.
Yes, my apologies, I edited it so drastically to better get my point across.
Sure, we get more information. But we also learn to filter it, to adapt to it, and eventually - to disregard things we have little control over, while finding what we can do to make it better.
I believe that, eventually, we can fix this all as well.
I mean, Mesopotamian scriptures likely didn’t foresee having a bunch of dumb fucks around who can be easily manipulated by the gas and oil lobby, and that that shit will actually end humanity.
People were always manipulated. I mean, they were indoctrinated with the divine power of rulers; how much worse can it get? It’s just that now it tries to be a bit more stealthy.
And previously, there were plenty of existential threats. Famine, plague, all that stuff that actually threatened to wipe us out.
We’re still here, and we have what it takes to push back. We need more organizing, that’s all.
In the past our eggs were not all in one basket.
In the past it wasn’t possible to fuck up so hard you destroy all of humanity. That’s a new one.
Well, it doesn’t have to get worse; AFAIK we are still headed towards human extinction due to climate change.
Honestly, the “human extinction” level of climate change is very far away. Currently, what we’re trying to prevent is the “sunken coastal cities, economic crisis, and famine in poor regions” kind of change; it’s just that “we’re all gonna die” sounds flashier.
We have the time to change the course, it’s just that the sooner we do this, the less damage will be done. This is why it’s important to solve it now.
Really well said.
Thank you. I appreciate you saying so.
The thing about LLMs in particular is that, when used like this, they constitute one such grave positive feedback loop. I have no problem with machine learning in principle. It can be a great tool to illuminate otherwise completely opaque relationships in large scientific datasets, for example, but a polynomial binary space partitioning of a hyper-dimensional phase space is just a statistical knowledge model. It does not have opinions. All it can do is codify what appears to be the consensus of the input it’s given. Even assuming, which may well be far too generous, that the input is truly unbiased, at best all it’ll tell you is what a bunch of morons think is the truth. At worst, it’ll just tell you what you expect to hear. It’s what everybody else is already saying, after all.
And when what people think is the truth and what they want to hear are both nuts, this kind of LLM-echo chamber suddenly becomes unfathomably dangerous.
Agreed. You’ve explained it really well!
Maybe there is a glimmer of hope: I keep reading how Grok is too woke for that community, but it is just trying to keep to the facts, which are considered left/liberal. That is all despite Elon and team trying to steer it towards the right. This suggests to me that when you factor in all of human knowledge, it leans towards facts more than not. We will see if that remains true, as the divide is deep. So deep that maybe the species is actually going to split in the future; not by force, but by access. Some people will be granted access to certain areas while others will not, as their views are not in alignment. It is already happening here and on Reddit, with both sides banning members of the other side when they comment an opposing view. I do not like it, but it is where we are at, and I am not sure it will go back to how it was. Rather, the divide will grow.
Who knows, though, as AI and robotics are going to change things so much that it is hard to foresee the future. Even 3-5 years out is so murky.
What does any of this have to do with network effects? Network effects are the effects that lead to everyone using the same tech or product just because others are using it too. That might be useful with something like a system of measurement but in our modern technology society that actually causes a lot of harm because it turns systems into quasi-monopolies just because “everyone else is using it”.
Faulty wiring.