This is a title of all time. Seems like it’s supposed to teach how to express empathy, not how to have empathy. That title goes in the books for being a fuck up of all time.
Every autistic person I’ve met has had more empathy than every Republican I’ve met.
Glad you brought up US politics here.
The terrible title makes it seem like autistic people do not experience empathy. I would say there is a ten-to-one disparity between the capacity for empathy autistic people hold and the capacity for empathy people are willing to demonstrate toward autistic people. If somebody bringing up Republicans bothers you, here’s a thought: are you sure your country’s conservative element isn’t following right behind where American conservatives are leading with their increasingly open eliminationist actions?
It’s like the fact that A LARGE SUPERPOWER run by IRRATIONAL NAZIS might come up in your day-to-day life as regards the rights and dignities of minorities as you use this PLANETARY INFORMATION NETWORK.
Ohhh feddit.uk. My friend, your LABOUR party is practically synonymous with our conservatives at this point.
I’m not from UK.
The implication being that autistic people have none? Wow.
No, the implication is that they have trouble expressing it. Which is accurate.
You can read that from the article text, but a) the text doesn’t appear to actually suggest autistic people do have empathy, which is a problem since b) the title absolutely implies they don’t.
At best, this is a terrible headline. But if I’m being honest, I don’t have much respect for an article that seems all too eager to tout the supposed benefits of an LLM, let alone one that is in all likelihood teaching people how to act more like an LLM. So I’m not inclined to take a charitable interpretation.
the text doesn’t appear to actually suggest autistic people do have empathy
Is that something that you need explained?
Did you stop reading the rest of the post when you saw that? Because it really looks like you did.
Sure didn’t.
You didn’t stop reading? Then it’s a bit weird that you’d think I don’t know autistic people have empathy, unless you decided to arbitrarily take the most bad-faith reading possible. If that’s the case, I recommend taking a breather before posting so that you don’t do that.
it’s a bit weird that you’d think I don’t know autistic people have empathy
It’s a bit weird that you think that I think that, and not that I was suggesting that no one else needs it explained to them either.
Research has shown that practicing social interactions with professionals in a clinical face-to-face intervention can improve outcomes for individuals, but these solutions are often costly or not widely available.
the common theme every single time I read about LLM chatbots being used for mental health - having human therapy is great but it’s just too expensive for regular people. and that’s treated as an immutable fact about society that can’t be changed. (“it is easier to imagine an end to the world than an end to capitalism”)
human therapy is too costly? OK, make it cheaper, or free, for the patients. it’s not widely available? OK, pay the therapists more, and give them better working conditions.
but where will the money to do that come from?
Silicon Valley is spending billions of dollars building AI datacenters. so I dunno, where is that money coming from?
resource allocation is a choice that we as a society, and a species, make. we can make different choices. we don’t need to confine ourselves to “well human therapy is expensive, so only rich people can access it, and poor people have to settle for AI slop, but they should be grateful because without the AI slop they’d have nothing at all”.
It’s not about capitalism:
- 1 human can talk to 1 human
- 1 chatbot can talk to 8 billion humans
Human therapy will be more expensive, for as long as we value human time more than machine time.
The article:
A specialized chatbot named Noora is helping individuals with autism spectrum disorder practice their social skills on demand.
Knowing what to say when co-workers tell you about their weekend is a social skill that many take for granted, but for some individuals with autism spectrum disorder (ASD), this social interaction can be challenging. Struggling to find the correct response in social situations like these can negatively impact social and professional relationships for people with ASD, and can worsen co-occurring conditions like depression.
Research has shown that practicing social interactions with professionals in a clinical face-to-face intervention can improve outcomes for individuals, but these solutions are often costly or not widely available. Lynn Koegel, a clinical professor of psychiatry and behavioral sciences at Stanford University, and Professor Monica Lam from Stanford’s Computer Science Department are the authors of recent research published in the Journal of Autism and Developmental Disorders that investigates the role of AI in filling this gap.
“Our research has shown that face-to-face work does help with social conversation … so we wanted to see if we could translate that into computer use,” said Koegel. “Accessibility is really important because a lot of people don’t have access to a face-to-face provider and the providers can be really expensive.”
Introducing Noora
In this work, funded in part by a seed grant from the Stanford Institute for Human-Centered AI and the Kind World Foundation, Koegel and colleagues evaluated interactions between participants with ASD who struggle with social interactions and Noora, a chatbot built with a large language model (LLM).
In one-on-one interactions, which can be written or spoken, Noora offers individualized guidance on a number of social communication scenarios; it helps users learn to ask questions, give compliments, and respond empathically, along with other areas of social communication that are often challenging.
In this recent work, Koegel focused on the impact of Noora’s empathy module. The chatbot first offers a leading statement, such as “I’m feeling really tired lately and it’s been so hard to concentrate,” and then asks the user to assess whether the statement is positive, neutral, or negative. Noora will then grade this response and ask the user to respond empathically to the initial statement. Based on whether the user successfully responds with empathy, Noora either offers a gentle correction or validates a correct response.
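For anyone wondering how a practice loop like that might be wired up, here is a minimal sketch in Python. Nothing below comes from the paper: the function names (run_empathy_trial, llm_grade, get_user_input) and the statement pool are hypothetical stand-ins, meant only to make the described flow concrete.

```python
# Hypothetical sketch of the empathy-module turn flow described above.
# None of these names come from the Noora paper; they are illustrative only.
import random

# The article says leading statements come from a pre-written, verified pool
# (never generated on the fly), so we model that as a simple hard-coded list.
LEADING_STATEMENTS = [
    ("I'm feeling really tired lately and it's been so hard to concentrate.", "negative"),
    ("I just got back from a great weekend hiking trip!", "positive"),
]

def run_empathy_trial(llm_grade, get_user_input=input):
    """One practice trial: statement -> valence check -> empathic reply -> feedback."""
    statement, true_valence = random.choice(LEADING_STATEMENTS)
    print(f"Noora: {statement}")

    # Step 1: the user labels the statement as positive, neutral, or negative.
    guess = get_user_input("Is that statement positive, neutral, or negative? ")
    if guess.strip().lower() == true_valence:
        print("Noora: That's right.")
    else:
        print(f"Noora: Not quite. That statement is {true_valence}.")

    # Step 2: the user tries to respond empathically; an LLM-based grader decides
    # whether the reply shows empathy and, if not, suggests an improvement.
    reply = get_user_input("How would you respond? ")
    is_empathic, feedback = llm_grade(statement, reply)
    if is_empathic:
        print("Noora: Nice, that response shows empathy.")
    else:
        print(f"Noora: Here's a suggestion: {feedback}")
```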
The research team carefully crafted prompts with representative examples to ensure that the answers are appropriate. To interact with users, Noora needed to know three things: what kind of statements warrant an empathic response, how to assess whether a user has responded empathetically, and how to offer a user helpful feedback to improve the response if it lacked empathy.
To craft leading statements, the team exposed Noora to “golden” responses and to inappropriate responses. The team both wrote responses themselves and used the LLM to write other responses that they then verified, creating a pool of 330 statements designed to elicit empathetic responses from participants. This means that Noora was never creating leading statements on the fly, which could have potentially led to inappropriate questions.
When it came to responding live to users’ empathetic responses, Noora had freer rein. Taking advantage of the LLM’s abilities for in-context learning, the team simulated users’ personalities and had Noora practice responding to users that showed varying levels of empathy. They also selected difficult cases and provided feedback for Noora to learn from.
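The article doesn’t publish Noora’s actual prompts, but using hand-verified “golden” and inappropriate example responses to steer an LLM’s judgment is ordinary few-shot (in-context) prompting. A grader along those lines might look something like the sketch below; the example transcript, prompt wording, and the generic complete() callback are all invented for illustration.

```python
# Hypothetical few-shot grading helper. `complete` is assumed to be any function
# that sends a prompt string to an LLM and returns its text completion; the
# examples and wording are invented here and are not Noora's actual prompts.
FEW_SHOT_EXAMPLES = """\
Statement: I'm feeling really tired lately and it's been so hard to concentrate.
Reply: That sounds exhausting. Is there anything I can do to help?
Verdict: EMPATHIC

Statement: I'm feeling really tired lately and it's been so hard to concentrate.
Reply: You should just go to bed earlier.
Verdict: NOT EMPATHIC. The reply jumps straight to advice without acknowledging the feeling.
"""

def grade_empathy(complete, statement, reply):
    """Judge a user's reply and produce feedback, guided by worked examples."""
    prompt = (
        "You are grading whether a reply to an emotional statement shows empathy.\n"
        "Follow the pattern of these examples:\n\n"
        f"{FEW_SHOT_EXAMPLES}\n"
        f"Statement: {statement}\n"
        f"Reply: {reply}\n"
        "Verdict:"
    )
    verdict = complete(prompt).strip()
    is_empathic = verdict.upper().startswith("EMPATHIC")
    feedback = "" if is_empathic else verdict
    return is_empathic, feedback
```

Parsing a free-form verdict like this is deliberately crude; a real system would constrain the model’s output format and, as the article describes, iterate on difficult cases with human feedback.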
[Image: an example of the Noora interface and a sample interaction.]
Putting Noora to the Test
To see how well Noora stacked up against treatment as usual, Koegel, Lam, and colleagues conducted a randomized trial with 30 participants in which half were assigned to use Noora for four weeks and half received no intervention. Participants using Noora were asked to complete 10 trials per day, five days a week, for a total of 200 trials.
Ultimately, Koegel said, the team was looking to evaluate whether Noora could improve users’ empathetic responses and whether those improvements would generalize to showing empathy in human-to-human communication as well.
“There’s a lot of AI research out there that shows that the ASD students improve using a program, but doesn’t show that it generalizes to real life,” said Koegel. “So that was our main goal.”
Comparing responses from the start of the experiment to the end, Koegel said that 71 percent of participants improved their number of empathetic responses when using Noora.
To see whether this progress could generalize, the team had participants take part in a Zoom call with a team member before and after the experiment, which included leading empathic statements. When reassessed after the intervention, the experimental group scored significantly higher than the control group, with an average increase of 38 percent, while the control group’s pre- and post-scores were similar. This shows that just four weeks of using the AI program significantly improved verbal empathetic responses.
With this success under Noora’s belt, Koegel and Lam are now interested in testing the effectiveness of other modules as well. They’re also working to open Noora to beta testing for public use and in clinical settings.
Beyond Noora, Koegel said she’s also incorporating AI into other aspects of her autism research, including motivational treatment for children with ASD who are in the beginning stages of using communication.
“I’d like to take a lot of the work that I’ve done over the decades that’s face-to-face and see how much we can translate to AI,” said Koegel. “Since kids really like the computer, we want to see if instead of just spending time on their computer or iPhone we can create a learning experience.”
“Empathy”… REALLY? ffs.