The healthcare landscape is changing fast with the introduction of artificial intelligence, and these technologies have shifted decision-making power away from nurses and onto the machines. Michael Kennedy, who works as a neuro-intensive care nurse in San Diego, believes AI could destroy nurses’ intuition, skills, and training, with the result that patients are left at risk.
You need AI to reword this spaghetti of an article. I know plenty of nurses; most of them shouldn’t be making decisions.
But I do agree that slapping AI on everything is a poor idea.
There is a company hospitals have hired to transcribe recordings. The software makes many transcription errors and then deletes the original audio, so there is nothing left to check the transcript against. Things aren’t looking good.
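For contrast, here’s a sketch of what an audit-safe pipeline would look like. To be clear, this is hypothetical (not the vendor’s code; the names and threshold are made up). The two rules: source audio is never deleted, and low-confidence segments go to a human instead of straight into the chart.

```python
# Hypothetical sketch of an audit-safe transcription pipeline, not the
# vendor's code. Two rules: source audio is never deleted, and
# low-confidence segments go to a human instead of straight to the chart.
import hashlib
import shutil
from pathlib import Path

ARCHIVE = Path("audio_archive")   # assumed write-once storage

def archive_original(audio_path: Path) -> Path:
    """Copy the raw recording aside, keyed by content hash, before transcribing."""
    ARCHIVE.mkdir(exist_ok=True)
    digest = hashlib.sha256(audio_path.read_bytes()).hexdigest()[:16]
    dest = ARCHIVE / f"{digest}_{audio_path.name}"
    shutil.copy2(audio_path, dest)
    return dest

def commit(segments, review_below=0.85):
    """Split ASR output into auto-accepted text and a human-review queue.
    `segments` is whatever the ASR engine emits: (text, confidence) pairs."""
    accepted = [s for s in segments if s[1] >= review_below]
    needs_review = [s for s in segments if s[1] < review_below]
    return accepted, needs_review
```

The deleted-audio part is the real sin: with the original recording gone, a hallucinated instruction is indistinguishable from a real one. Keeping it is what makes errors catchable at all.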
Shitty software existed before AI too. Things have always looked bad in hospitals.
Hasn’t AI already been shown to be better at catching things like cancer than humans?
There are some things that computers can be better at than humans.
Yes! And we should use it when it has been proven effective. But the AI shouldn’t be able to administer drugs.
For sure. There always needs to be a human in the loop. But this notion people seem to have that all AI is completely worthless just isn’t true.
What’s scary is the hospital administration that will use AI to deny care to unprofitable patients (I’ve listened in on these conversations).
Where’s anyone saying it’s worthless? That’s not in the article nor in these comments.
The issue is how it’s being used. It’s not being used to detect cancer. It’s being used for “efficiency”, which means more patients being seen by fewer nurses. It’s furthering the goals of the business majors in hospital administration, not the nurses or doctors who are caring for the patient.
LLMs are largely worthless (in the context of improving human society).
Neural Nets aimed at much more specific domains (recognizing and indicating metastases or other abnormalities in pathology slides for human review, for example) are EXTREMELY useful and worthwhile.
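Concretely, those tools are a scorer plus a review queue: the model never diagnoses, it only decides what the pathologist looks at first. A minimal sketch of the pattern, with a placeholder scorer standing in for a real trained model:

```python
# Hypothetical sketch of the "flag for human review" pattern, not any
# vendor's product. metastasis_score is a stand-in for real model inference.
import random

def metastasis_score(tile_id: str) -> float:
    """Placeholder for CNN inference; returns P(abnormal) for one slide tile."""
    random.seed(tile_id)              # deterministic fake score for the demo
    return random.random()

def triage(tile_ids, flag_at=0.3):
    """Sort flagged tiles highest-risk first. Note what this does NOT do:
    it never labels a slide benign or malignant, it only orders the queue."""
    scored = [(metastasis_score(t), t) for t in tile_ids]
    return sorted((s, t) for s, t in scored if s >= flag_at)[::-1]

if __name__ == "__main__":
    tiles = [f"slide_42_tile_{i}" for i in range(100)]
    for score, tile in triage(tiles)[:5]:
        print(f"{tile}: review first (score {score:.2f})")
```

The whole value is in the ordering; the diagnosis still belongs to the human at the microscope.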
AI nearly everywhere is there to improve efficiency: fewer people become more productive so that the owners keep more money. A pay raise for that productivity is off the books, since now you’re “less skilled” anyway.
Machine learning for helping a radiologist analyze images is super helpful and a mature field.
Whatever “AI” LLM nonsense tech bros have been trying to add into everything over the last two years is probably not all that helpful, but I could be proven wrong.
It’s also been shown to hallucinate whole parts of the doctor/nurse discussion and instructions.
Dogs can also be better at detecting cancer than humans. And dogs tend to hallucinate less.
Hallucinations aren’t a problem with the actually medically useful tools he’s talking about. Machine learning is being used to draw extra attention to abnormalities that humans may miss.
It’s completely unrelated to LLM nonsense.
Perhaps we should stop calling all of it “AI”. Machine learning is a useful tool.
“AI” long predates LLM bullshit.
You are right. My pet peeve is that it is now used as a marketing term without actual meaning. It used to be the word “smart”: instead of “buy this smart toaster”, it’s now “buy this AI-powered toaster”. Sorry if this reply was too verbose for your liking.
They’re better at smelling cancer than humans.
I’m not sure we can definitively say they hallucinate less.
I love it when writers editing the words of others somehow can’t pass grade-school writing classes.
?
Do you expect doctors to do it or something?
To make the decisions for the patient? Uhh yeah. And they do. Check the chart.