Lemmy, I really would like to hear your opinions on this. I am bipolar. After almost a decade of being misdiagnosed and on medication that made my manic symptoms worse, I found stable employment with good insurance and have been able to find a good psychiatrist. I've been consistently medicated for the past 3 years, and this is the most stable I have been in my entire life.
The office has rolled out the use of an app called MYIO. My knee-jerk reaction was to not be happy about the app, but I managed my emotions, took a breath, and vowed to give it a chance. After being sent the link to validate my account, the app would force-restart my phone at the last step of activation. (I have my phone locked down pretty tight, with lots of Google shit and data sharing disabled, so I'm thinking that might be the cause. My phone is also like 4-5 years old, so that could also be the cause.)
Luckily I was able to complete the steps on PC and activate that way. Once I was in the account there were standard forms to sign, like the HIPAA release. There was also a form there requesting I consent to the use of AI. Hell to the NO. That’s a no for me dawg.jpg.
I’m really emotional and not thinking rationally. I am hoping for the opinions of cooler heads.
If my doctor refuses to keep me as a patient unless I consent to AI, what should I do? What would you do? Refuse, even though it could cost me this provider, or consent, crossing a major line in the sand for me, to keep a provider I have a rapport with, who knows me well enough to know when my meds need adjusting?
EDIT: This is the text of the AI agreement. As part of their ongoing commitment to provide the best possible service, your provider has opted to use an artificial intelligence note-taking tool that assists in generating clinical documentation based on your sessions. This allows for more time and focus to be spent on our interactions instead of taking time to jot down notes or trying to remember all the important details. A temporary recording and transcript or summary of the conversation may be created and used to generate the clinical note for that session. Your provider then reviews the content of that note to ensure its accuracy and completeness. After the note has been created, the recording and transcript are automatically deleted.
This artificial intelligence tool prioritizes the privacy and confidentiality of your personal health information. Your session information is strictly used for the purpose of your ongoing medical care. Your information is subject to strict data privacy regulations and is always secured and encrypted. Stringent business associate agreements ensure data privacy and HIPAA compliance.
Edit 2: I just wanted to say that I appreciate everyone here that commented. For the most part everyone brought up valid points, and helped me see things I had not considered. I emailed my doctor and let them know I did not want to agree to the use of AI. I let them know that I was cool with transcription software being used as long as it was installed locally on their machines, but I did not want a third party online app having access to recorded sessions for the purposes of transcription. They didn’t take issue with it.
Thank you everyone!
I would nope the fuck out and change doctors. A regurgitation machine prone to hallucinations has no place in medical care.
I would probably report him and leave him a bad Yelp review warning others.
Yeah, though that’s about 4/5 of the actual people I’ve met working in psychology.
If this was for a GP, I would agree with this stance. But a good, fitting and competent mental health professional can be harder to find.
I don’t believe that. They just don’t want to pay them what they’re worth. Machines don’t ask for days off or health insurance, that’s their rationale. I hope they go out of business.
Definitely ask how they are using it. I know a number of physicians who are just using it as dictation software to quickly make a first draft of their paperwork; it helps lighten a big load.
This is the answer.
Most docs can’t keep up with the mountain of paperwork or billing codes required by insurance companies these days. The software helps, but requires the doc to review and sign off the notes.
It’s not an LLM coming up with treatment plans, etc. It’s transcription+
Based on OP's edit, that sounds exactly like what it's doing.
Dictation and summary software could be installed onto the doctor’s computer.
There is something else going on here, with pushing an app onto patients.
The AI is the summary software. How else do you think the summary happens?
Lol, it happens on the doctor’s PC, without triggering clients.
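For what it's worth, local-only dictation is not hypothetical tech. Here's a minimal sketch using OpenAI's open-source Whisper model, which runs entirely on the clinician's own machine (the file names and model size are placeholders, not anything this office actually uses):

```python
# Minimal sketch: local-only dictation on the clinician's PC.
# Assumes `pip install openai-whisper`; no audio leaves the machine.
import whisper

model = whisper.load_model("base")  # small model, runs on CPU

# "session.wav" is a hypothetical recording stored only on this machine.
result = model.transcribe("session.wav")

# Save the raw transcript for the clinician to edit into a clinical note.
with open("session_transcript.txt", "w") as f:
    f.write(result["text"])
```

Whether a given practice would actually deploy something like this is another question, but nothing about note-taking requires a patient-side app.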
OP, I'm a bit of an unfortunate expert in U.S. healthcare.
The fact that you have a psychiatrist who you trust that has you on the right meds and have been with for 3 years is invaluable. You calling yourself stable is a huge thing. You wouldn’t be saying that if you weren’t on solid ground.
It would be completely crazy to give up a psychiatrist who is on your insurance over some AI garbage that is just transcribing notes for your doctor.
At a bare minimum, get a new psychiatrist who is on your insurance before switching. That should take about 6 months if you're very lucky.
Play it through: do you want to lose a quality prescriber and talk therapist? Also, maybe you should just tell them you’re extremely concerned and see what they say or do.
The end result can’t be worse than you giving up on your mental health. You already know how hard it is to find quality psych care.
I 100% agree with you. I trust my doctor. I don't trust the app. Prior to this we were using Zoom.
As a person who has strong bipolar tendencies, though not over the threshold for a diagnosis, even I struggle with these sorts of things and often find myself asking, "Is this thought not just self-sabotage at the end of the day?" I'm also physically disabled and go to a lot of doctors' appointments, and those doctors now use AI for notes. I don't like it either, but to allow that to stand in the way of my care would absolutely be self-sabotage. If my doctors started outsourcing other aspects of their jobs to AI, I would seriously have a problem and would reconsider my position. But note-taking is incredibly time-consuming for doctors, and if using software that transcribes our conversations allows them to be better at their actual job of being a doctor, that's a compromise I can make, especially when I remind myself that bipolar symptoms often get in the way of a person's willingness to compromise.
That’s the biggest concern.
People who need life-saving mental healthcare are already on a brave journey just by admitting they need help, and it'd be a shame if AI crap got in the way of that.
People just don’t think.
A video-conferencing call is generally one-to-one with the clinician you know and have a relationship with.
An AI app on your phone opens your data to being viewed and scrutinised by a 3rd party within the medical practice or outside. (Which may be a positive, adding other insights that a single person may miss) Unless this is agreed, it would be a breach of patient trust. It seems the agreement you click gives your permission to share your data anywhere that ‘furthers treatment’.
It seems like massive overreach to install it on your phone instead of on the doctor's computer (where it could still summarise all interactions).
I would say you are right to want to move away from this kind of imposition. If you do change doctors, make sure to indicate that you will not install any apps as part of your treatment.
At the very least, I would install the app under a separate user from my main account.
Honest question: was it no problem that Zoom was being used for the sessions? I'm asking because, from the post, you seem to care about your privacy.
The Zoom sessions weren't being recorded or analyzed by AI to create a transcript. I met with my doctor via Zoom, and the doctor took notes.
I understand that. My point is that Zoom has access to the video and audio feed in transit. Despite being very popular, they lied about their systems without pause when they got big during COVID, including claiming that their system was end-to-end encrypted, which it was not.
There are better alternatives, but unfortunately only a small fraction of people know about them.
To be clear, I support you if you are looking to preserve your privacy with this AI transcription; I just wanted to let you know that your information was already leaking, even if people baselessly believed it was not.
Thanks!
What the hell! The question is nothing crazy; mindlessly accepting it is what is crazy! It is a hard situation to be in, for sure, but it seems you have zero idea about the consequences if you think it's just "some AI garbage that is just transcribing notes".
No. Absolutely not. I cannot trust any current AI model with HIPAA compliance.
Find another doctor. I just had to fire my therapist because, when I went in for this week's appointment, they were playing some Jesus worship service and songs. I told her it was our last session because I no longer had trust in their office, and added that I had no faith any progress would ever be made after being triggered while waiting to see my therapist. It could have been the receptionist's choice of music or someone else's from their office, but since they do not advertise as a faith-based therapy group, they should have left that shit at home or should expect more of the same from people like me.
It's worth researching a therapist's credentials; some states allow "pastoral counseling degrees" and so on to be a path to "mental health therapist." You want an LISW, a licensed social worker. I'm not saying there aren't weirdos, or that your experience wouldn't happen with a social worker… just that many folks don't realize some therapists went to theology classes instead of psychology classes, which is a prime setup for problems.
Probably better to look for a licensed psychologist/psychiatrist, or someone with a PsyD. Don't really want to risk it when someone isn't in the field.
I didn't know about the theology-to-therapist route. My therapist herself never indicated her faith leanings, so credit due there. She has a master's and is an LPC. As I mentioned before, it's entirely possible she had nothing to do with, nor endorses, the music choice in the building, but tacit endorsement by not stopping it is enough for me to leave.
Maybe, just maybe, let’s not play music from the loudest hate group in the USA in the lobby of the therapist office.
Can you ask how AI is used in the app?
I can, but in truth I don’t care. I don’t want my data being used to train AI, and I don’t want my treatment to be guided by AI.
The "fine print" you added doesn't say the automated transcript will be used for training a model. I'd highly, highly doubt HIPAA-protected clinic notes would be used for training an LLM. If they did, the clinic would go bankrupt from lawsuits.
Also, if they only use AI for automated transcription, would you feel the same if, instead of "AI," it were a dedicated automated transcription tool?
If you abhor all things AI, your feelings of not continuing with this clinic are valid. However, I don’t think they are using AI in ways you think they are.
If they did, the clinic would go bankrupt from lawsuits.
For that, patients would need to be able to prove that their data was used. How would you be able to prove it?
Being disappeared for being mentally ill, trans, or gay, which conservatives would love to have rebranded as mental illness. Assuming you had a lawyer on retainer before you were disappeared, and family willing to fight for you while you languish in a concentration camp.
So ask about those two specific points.
And in the session you can (probably) go over the generated notes with your doctor to double check.
The term AI is very broad and generic; today it's used to refer to LLMs and fancy denoisers, but AI has been around for decades in some form or another. My point is, speech transcription has been around longer than the current LLM fad, so it might not be an LLM doing your transcription. Would that allay some of your concerns?
If it were locally run transcription software, would a healthcare provider still be required to ask your permission to use it?
I very much hope so, because in neither case can they guarantee that the data won't be transferred elsewhere.
It doesn’t sound like AI is being used for either. It’s just summarizing the encounter at the end as a note, and not storing any data to train on.
And to piggyback on this question: what alternatives do you have, and are they actually viable?
The alternative is finding a different provider. I already have a long list of offices to call. Getting a list together was the first thing I did when they notified me about rolling out this app.
It records the sessions then makes a transcript for “note taking.”
I feel very strongly about this and I would change doctors. But of course it won't be long before they all do this and we'll have no alternative. The two biggest problems I see are:

- I saw a news story where a doctor who uses this said it saves her time because, before seeing the patient, she gets an AI summary of their chart, so she doesn't have to "go through several tabs" to read the actual information. Oh great, let the statistical probability text generator hallucinate up some shit about what's in a person's chart, to save 10 seconds of tab-clicking to read the ACTUAL patient records! If they want a summary, there's no reason a traditional report or summary screen couldn't be programmed to pull data out of the most important fields and arrange them in the desired format (a toy sketch of that follows this comment).
- THEN the doctor uses her damn phone to record your visit, everything you say, and that gets run through the AI, which generates a visit summary and puts it into your medical records. So god only knows what 3rd-party private corporate vulture has access to your doctor/patient conversations and what they'll do with them, and again, what hallucinated shit will get put into your medical records!
So your doctor never reads your chart and never writes your chart! [Redacted] me now! Also, what happens after a few iterations of an AI summarizing records that an AI wrote?
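For what it's worth, the "traditional report" idea from the first point above needs no model at all. A toy sketch in Python, with the chart fields invented purely for illustration:

```python
# Toy sketch of a deterministic chart summary: no model, no hallucination.
# The field names are invented for illustration, not from any real EHR.
def chart_summary(chart: dict) -> str:
    lines = [
        f"Allergies: {', '.join(chart.get('allergies', [])) or 'none recorded'}",
        f"Active meds: {', '.join(chart.get('medications', [])) or 'none recorded'}",
        f"Last visit: {chart.get('last_visit', 'unknown')}",
    ]
    return "\n".join(lines)

print(chart_summary({
    "allergies": ["penicillin"],
    "medications": ["lamotrigine 200mg"],
    "last_visit": "2024-11-02",
}))
```

Same fields in, same summary out, every time; there is nothing to hallucinate.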
AI is really good at concepts, not logic. But even then, the performance is going to be dependent on the data it was modelled with.
You can ask for a specific symptom of pneumonia and it can answer. You can also ask for a summary of pneumonia, as someone has most likely written one already and the AI knows to use it because of the concept relevance. But if you ask it to summarize a patient's information, it will split the patient information into blocks it can summarise based on whatever summarisation patterns it has in the model data. I can assure you it cannot ever have all the possibilities pretrained already.
My fear is that the models merge all kinds of patient record info together in the statistical model, so the "summaries" will just write the most likely next word in the phrase, and wrong information and incorrect diagnoses will be recorded in a person's record, or important information will be omitted.
I predict that people will be harmed or die because of missing or false information in patient records. But it will be difficult for the public to find out about it because of privacy issues and the unwillingness of institutions to acknowledge it.
Drugs have to go through multiple stages of testing and trials before they’re allowed to be used on patients. But no one is doing any kind of testing on the effects of this at all, let alone controlled trial rollouts with review, before allowing general use.
An AI tool does NOT prioritize privacy. It’s literally the opposite.
It would be an absolute deal breaker for me. There has never yet been a commercially available AI that doesn’t hallucinate, and there’s no element of my healthcare where I’m comfortable having facts be unreliable.
I’m a therapist and I use SimplePractice for my practice. They recently added an AI note taker that is HIPAA compliant, and the consent form they suggest giving to clients sounds okay, but I read the actual privacy policy and the language used is way too vague for me to trust, so I don’t use it.
In your position, I would:
- Ask if you have to sign that, or if you can opt out. Your specific provider may be open to just not enabling the AI note taker for your profile, and they may be able to remove that form from the app for you on their end. This may not be in their control, but if they're a good person who cares about you, they'll make an effort to get it done anyway.
- If not, ask for a link to the actual privacy policy and see if it sounds acceptable to you. Not the practice's Privacy Practices, not the Patient Portal privacy policy, but the actual privacy policy for the AI note taker (whoever you ask might have to do some digging to actually find it).
My medical provider started doing that when I last had a video conference with them, and I declined to allow the use of AI. They took no issue with that – didn’t even bring it up. It’s very unlikely that your provider will care that you declined either. I recommend saving your energy for other problems and dealing with this later in the unlikely event that they do actually make an issue of it.
Out of curiosity, what platform do they use for the video conference?
Last time I talked to them, they used Zoom.
The privacy statements are fucking lies.
I will not share my innermost mental issues with some group of 20-something “move fast and break things” sociopaths in Silicon Valley.
Fuck no. I wouldn’t even install the app. That’s already completely unnecessary.
I would be out of that office faster than the speed of light.
I know this might go against the flow here, but realistically, if they're using the tools in the way they say they are (and you should 100% check with your doctor and warn them about possible hallucinations), it's not that bad. Speech-to-text is not prone to hallucinate; it can fail and detect the wrong words, but it shouldn't outright invent content. After that, LLMs are good at summarizing things; yes, they are prone to hallucinations, which is why having the doctor review the notes immediately after the session is important (and they said they do). So I don't see this as such a big issue from the usability point of view.
You might still have issues from a privacy point of view, and that's a much more complex discussion with them about what kind of contract they have with the LLM company to ensure no HIPAA violations (from the LLM company's point of view it's just making a summary of a text; it might store it, and then the whole stack is suable). They need to understand that just because they haven't kept a copy around doesn't mean the other party hasn't, and because they shared it without your agreement (you're only agreeing to AI note-taking, which can be done locally; see the sketch below), sharing information with third parties is entirely on them and they would be liable. I'm not a lawyer, so you might want to double-check that, but I would be very surprised if that's not the way it works; otherwise doctors could get away with a bunch of HIPAA violations by having you sign something that says they use a computer to store data and then storing things in a shared Google Drive.
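To make the "can be done locally" point concrete, here is a rough sketch of the summarize-for-review step running against a locally hosted model via Ollama. The model name and prompt are my assumptions, not anything MYIO actually does:

```python
# Hypothetical sketch: drafting a clinical note from a transcript with a
# locally hosted model. Assumes `pip install ollama` and a running Ollama
# server with a model already pulled; nothing is sent to a third party.
import ollama

def draft_clinical_note(transcript: str) -> str:
    """Return a draft note that the clinician must still review and sign."""
    response = ollama.chat(
        model="llama3.1",  # placeholder; any locally pulled model works
        messages=[
            {"role": "system",
             "content": "Summarize this session transcript as a draft clinical note."},
            {"role": "user", "content": transcript},
        ],
    )
    return response["message"]["content"]
```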
After that, LLMs are good at summarizing things
It depends. For programming, I've tried using them to write commit messages and they suck at it. And for healthcare they're not summarizing blog posts, they're dealing with potential life-or-death scenarios. Doctors have expert knowledge to catch details that LLMs won't pick up on, and LLMs won't notice nonverbal cues either, which make up a large portion of communication. Doctors also have a thought process to log that LLMs don't have. Even if the doctor reviews the notes afterward, the quality will probably be worse than before.
I feel like the doctor and the patient should have to sign off on notes even without AI.
AI and the people pushing it are not trustworthy. They have neither your data security nor your wellbeing at heart, even if your doctor does. LLMs are inherently bad at data security and there is no way these companies can, in good faith, promise HIPAA compliance. Likely, the AI use will be on the part of the insurance company to find ways of denying your claims.
LLMs are inherently bad at data security and there is no way these companies can, in good faith, promise HIPAA compliance
This is simply false. AI sucks but it doesn’t help to lie about it.
EDIT:
Go run a local model on your own computer, and delete the context when you are done. Boom you just used an LLM in a way that maintains your data security.
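A minimal sketch of that, assuming the llama-cpp-python bindings and a hypothetical GGUF weights file on disk (both the file name and the prompt are made up):

```python
# Run a model entirely in-process; the prompt and output exist only in
# this process's memory. Assumes `pip install llama-cpp-python`.
from llama_cpp import Llama

llm = Llama(model_path="model.gguf")  # hypothetical local weights file

out = llm("Summarize: patient reports improved sleep on current dosage.",
          max_tokens=128)
print(out["choices"][0]["text"])

# Drop the object (or just exit); the context is gone, and nothing ever
# touched the network, so there is nothing for a third party to keep.
del llm
```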
So your example doesn’t prove a damn thing; the data security in that case had nothing to do with the llm…
data security in that case had nothing to do with the llm
That’s kinda my point.
This is about extracting data that was used as training data. Just don’t do that with sensitive data.
You think they won’t use this the same way? That’s adorable.
“I don’t trust companies to hold their promises” is a very different argument from:
LLMs are inherently bad at data security and there is no way these companies can, in good faith, promise HIPAA compliance
It is certainly possible to implement a secure LLM service.