I know it’s not even close there yet. It can tell you to kill yourself or to kill a president. But what about when I finish school in like 7 years? Who would pay for a therapist or a psychologist when you can ask a floating head on your computer for help?
You might think this is a stupid and irrational question. “There is no way AI will do psychology well, ever.” But I think in today’s day and age it’s pretty fair to ask when you are deciding about your future.
homie lemme let you in on a secret that shouldn’t be secret
in therapy, 40% of positive client outcomes come from external factors changing
10% come from my efforts
10% come from their efforts
and the last 40% comes from the therapeutic alliance itself
people heal through the relationship they have with their counselor
not a fucking machine
this field ain’t going anywhere, not any time soon. not until we have fully sentient general ai with human rights and shit
I don’t think there’s harm in allowing people who would never be able to afford life-saving medicine to have life-saving medicine cat-puzzle-feeder style
Edit: this was me, and access hasn’t changed the fact that I do not generally derive value from it.
You realize that adds up to 60% right?
40 40 10 10
math moment
I won’t trust a tech company with my most intimate secrets. Human therapists won’t get fully replaced by AI.
I don’t think the AI everyone is so buzzed about today is really a true AI. As someone summed it up: it’s more like a great autocomplete feature but it’s not great at understanding things.
It will be great at replacing Siri and the Google Assistant, but not at giving people professional advice, not by a long shot.
Not saying an LLM should substitute for a professional psychological consultant, but that someone is clearly wrong and doesn’t understand current AI. Just FYI
Care to elaborate?
It’s an oversimplified statement from someone (sorry, I don’t have the source) and I’m not exactly an AI expert, but my understanding is that the current commercial AI products are nowhere near the “think and judge like a human” definition. They can scrape the internet for information and use it to react to prompts, and they can do a fantastic job of imitating humans, but the technology is simply not there.
The technology for human intelligence? Any technology will always be very different from human intelligence. What you’re probably referring to is AGI, artificial general intelligence, which is defined as an “intelligent” agent that doesn’t excel at any one thing but can handle a huge variety of scenarios and tasks, as humans do.
LLMs are specialized models for generating fluent text, but they are very different from autocomplete because they can work with concepts, semantics, and (pretty surprisingly) rather complex logic.
As an oversimplification, even humans are fancy autocomplete. They’re just different, as LLMs are different.
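To make the “fancy autocomplete” point concrete, here’s a toy sketch in Python of what next-token generation looks like. The bigram table and prompt are made up purely for illustration (a real LLM conditions on the whole context with a neural network); the point is just that the model only ever picks a likely next word given what came before, and longer text comes from running that in a loop.

```python
import random

# Made-up toy "language model": probability of the next word given the previous word.
# This is NOT how a real LLM is built; it only illustrates the generation loop.
NEXT_WORD = {
    "i":       {"feel": 0.6, "think": 0.4},
    "feel":    {"anxious": 0.5, "better": 0.5},
    "think":   {"therapy": 0.7, "too": 0.3},
    "anxious": {"today": 1.0},
    "better":  {"today": 1.0},
    "therapy": {"helps": 1.0},
}

def generate(prompt_word, max_tokens=5):
    """Repeatedly sample the next word -- autocomplete in a loop."""
    words = [prompt_word]
    for _ in range(max_tokens):
        choices = NEXT_WORD.get(words[-1])
        if not choices:          # no known continuation, stop generating
            break
        next_word = random.choices(list(choices), weights=choices.values())[0]
        words.append(next_word)
    return " ".join(words)

print(generate("i"))  # e.g. "i feel anxious today"
```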
You are putting WAY too much faith in the ability of programmers. Real AI that can do the job of a therapist is decades away, at least - and then there’s the approval process, which will take years all by itself. Don’t underestimate that. AI therapy is uncharted territory, and the approval process will be lengthy, detailed, and incredibly strict.
Lastly, there’s public acceptance. Even if AI turns out to have measurably better outcomes, if people aren’t comfortable with it, statistics won’t matter. People aren’t rational. I don’t care how “good” Alexa is, or how much evidence you show me - I will never accept that a piece of software can understand what it’s like to grow up as a person. I want to talk about my issues with a flawed, fallible human, not a box plugged into the wall.
You ask a valid question, just much earlier than necessary. I’d be surprised if AI was a viable alternative by the time you retire.
There are already digital therapeutic platforms approved for mental health. Orexo’s Deprexis is one such program. The fact is that the vast majority of people who need therapy aren’t getting it now. These AI therapy models will provide services to those people. I’m willing to bet that in a decade, the majority of therapy will be done by AI, with human therapists focused on the most severe behavioral health conditions.
Dr. Sbaitso was proven to be clinically effective in the early 1990s.
No, it won’t. I don’t think I would have made it here today alive without my therapist. There may be companies that have AI agents doing therapy sessions but your qualifications will still be priceless and more effective in comparison.
AI cannot think; it does not reason or apply logic. It outputs a result from an input prompt. That will not solve psychological problems.
That’s what AI does at the moment, which may not necessarily be true in a few years, and that’s what OP is asking about.
At the end of the day, AI (not just the LLMs we call AI now) is really good at doing boring machine work: tasks that are repetitive, simple, and routine. This includes the LLMs, which can summarize boring text and generate more boring text. They can’t create anything new, just output and rearrange.
What there will always be a need for is human work. That includes creativity, emotions, and human interaction. A machine can’t replace that at all. Psychology and therapy are all emotions and human interaction, so it might be the safest career choice. Same with something like haircutting or other careers that involve human wisdom and personal skills.
Boring jobs like sending and receiving emails might be replaced. The reason businesses are so scared is that the majority of people in an office just do that.
Hey, maybe your background in psychology will help with unfucking an errant LLM or actual AI someday :P
Given the vast array of existing pitfalls in AI, not to mention the outright biases and absence of facts, AI psychology would be deeply flawed and would be more likely to kill people.
Person: I’m having unaliving thoughts, I feel like it’s the only thing I can do
AI: Ok do it then
That alone is why it’ll never happen.
Also, we need to sort out how to house, heal, and feed our people before we start going and replacing masses of the workforce.
The level of liability you’d expose yourself to by actively advertising it as some sort of mental health product is insane.
I do believe someone will be dumb enough, but it’s a truly terrible, insanely unsafe idea with anything resembling current tech in any way.
Here’s a case study for you: An eating disorder hotline got rid of the humans in favor of an AI chatbot. Lasted less than a week before it was giving horrible advice.
https://www.theguardian.com/technology/2023/may/31/eating-disorder-hotline-union-ai-chatbot-harm
Psychology will be controlled by humans, probably forever.
AI won’t make psychology redundant. It might allow easier and broader access to basic, first-line psychological support, though.
What is more likely to make psychological consultants a risky investment is the economic crisis. People are already prioritizing food over psychological therapy. Psychological therapy is, unfortunately, a “luxury item” nowadays.
The caring professions are often considered to be among the safest professions. “Human touch” is very important in therapy.
I think it’s one of those things that AI can never make redundant.
All my points have already been (better) covered by others in the time it took me to type them, but instead of deleting will post anyway :)
If your concerns are about AI replacing therapists & psychologists, why wouldn’t that same worry apply to literally anything else you might want to pursue? Ostensibly anything physical can already be automated, so that would remove “blue-collar” trades, and now that there’s significant progress in creative/“white-collar” sectors, that would mean the end of everything else.
Why carve wood sculptures when a CNC machine can do it faster & better? Why learn to write poetry when there’s LLMs?
Even if there was a perfect recreation of their appearance and mannerisms, voice, smell, and all the rest – would a synthetic version of someone you love be equally as important to you? I suspect there will always be a place and need for authentic human experience/output even as technology constantly improves.
With therapy specifically there are probably going to be elements that an AI can [semi-]uniquely deal with, just because a person might not feel comfortable being completely candid with another human; I believe that’s what using puppets or animals or whatever to act as an intermediary is for. Supposedly even a really basic thing like ELIZA was able to convince some people it was intelligent, and they opened up to it and possibly found some relief from it, and there’s nothing in it close to what is currently possible with AI. I can envision a scenario in the future where a person just needs to vent, and having a floating head compassionately listen and offer suggestions will be enough; but I think most(?) people would prefer/need an actual human when the stakes are higher than that – otherwise the suicide hotlines would already just be pre-recorded positive affirmation messages.
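For a sense of how little machinery ELIZA actually needed, here’s a rough sketch of the same idea in Python: match a keyword, flip the pronouns, and reflect the user’s own words back as a question. The patterns and canned responses below are made up for illustration; the real ELIZA used a much larger script of rules, but nothing fundamentally smarter than this.

```python
import random
import re

# A few made-up ELIZA-style rules: a regex to match, and response templates.
# "{0}" gets filled with whatever followed the keyword, pronouns flipped.
RULES = [
    (re.compile(r"i feel (.*)", re.I),  ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I),    ["Why do you say you are {0}?", "Do you want to be {0}?"]),
    (re.compile(r"because (.*)", re.I), ["Is that really the reason?"]),
]
REFLECT = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(text):
    """Flip first-person words so the echoed phrase reads naturally."""
    return " ".join(REFLECT.get(word.lower(), word) for word in text.split())

def respond(user_input):
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return "Please tell me more."  # default when nothing matches

print(respond("I feel like nobody listens to my problems"))
# e.g. "Why do you feel like nobody listens to your problems?"
```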
By the way, if you want to try Eliza, you can telnet into telehack.com and run the command eliza to launch it.
You still had some good/new points in last paragraph. Thx