Key Facts:
- The AI system uses ten categories of social emotions to identify violations of social norms.
- The system has been tested on two large datasets of short texts, validating its models.
- This preliminary work, funded by DARPA, is seen as a significant step in improving cross-cultural language understanding and situational awareness.
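For a sense of what “ten categories of social emotions” could look like mechanically, here’s a minimal sketch of that kind of short-text classifier. The label set, the violation markers, and the threshold are all illustrative guesses on my part, not the paper’s actual taxonomy or method; the zero-shot pipeline is just a stand-in for whatever models the researchers actually trained.

```python
# Toy sketch of a social-emotion classifier over short texts.
# The ten labels below are illustrative guesses, NOT the paper's taxonomy.
from transformers import pipeline

EMOTIONS = [
    "guilt", "shame", "pride", "embarrassment", "admiration",
    "contempt", "gratitude", "anger", "envy", "compassion",
]
# Emotions I'd guess signal a norm violation; pure assumption on my part.
VIOLATION_MARKERS = {"guilt", "shame", "embarrassment", "contempt", "anger"}

# Zero-shot classification over arbitrary labels; bart-large-mnli is a real
# public checkpoint, but the paper's actual models are surely different.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def flag_norm_violation(text: str, threshold: float = 0.5):
    """Score a short text against the emotion labels and flag it if a
    violation-associated emotion dominates."""
    result = classifier(text, candidate_labels=EMOTIONS)
    top_label, top_score = result["labels"][0], result["scores"][0]
    flagged = top_label in VIOLATION_MARKERS and top_score > threshold
    return top_label, round(top_score, 3), flagged

print(flag_norm_violation("I can't believe he said that at the funeral."))
```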
This will absolutely be used to oppress the neurodivergent at some point
Then it’s all Butlerian Jihad, bay-bee!
Could be helpful if it silently (or at least subtly) warns the user that they’re approaching those boundaries. I wouldn’t mind a little extra assistance preventing those embarrassing after-the-fact realizations. It’d have to be done in a way that preserves privacy though.
Like most scientific and technical advances, it could be an amazing tool for personal use. It won’t, of course. It will be used to make someone rich even richer, and to control or oppress people. Gotta love humanity.
Still dangerous; an authority could subtly shift those boundaries in order to slowly push your behaviour in a desired direction.
Definitely a hazard. My ideal solution is something that could be built and evaluated in a way that allows me to know that it does what it’s supposed to do and nothing else. From there, I’d want to run it on my own hardware in an environment under my control. The idea is to add enough layers of protection that it’d be easier and less expensive for that authority to change my behavior by hiring goons to beat me with a wrench. At least then I’ll have a fairly unambiguous signal that it’s happening but getting to that point would take a significant investment of effort, time and money.
Everyone, starting with the neurodivergent. Or some other favoured boogieman-of-the-day such as LGBT+ people.
As long as this doesn’t get repurposed to regulate “social credit”, this is fine.
That’s the scary part of it. Idc how good it is, but if it starts to be used to censor information and rate humans, that’s the line.
The line will come far far FAR before that
but if it starts to be used to censor information and rate humans, that’s the line.
That line has already been crossed. Since it’s already been crossed, it’s inevitable that this will be used in that way.
there is no application for this that is actually good
It could help identify and assess people on the autism spectrum or similar.
And what good comes of possibly covertly subjecting individuals to an autism test?
What does the examiner do with the results? Or what does their boss do with the results?
No good!
Well, I mean, in a functioning system it’d be private medical documents, used to give each patient the best treatment.
In our system it’ll be used by a private company as “their” data and sold to whoever will pay
Yeah I don’t think anyone using this to do that is going to do good things with that information
Why are we letting an AI determine social norms? Social norms change like every 5 years.
My dumb autistic ass be all like
Can we like… maybe have some good (as in morally good) use cases for AI?
I know we had the medical diagnosis one, that was nice. Maybe some more like that?
I’m extremely skeptical of medical diagnosis AIs. Without being able to explain why it comes to a conclusion, how do we know it won’t just accidentally find correlations? One example I heard of recently was an AI that was extremely good at detecting TB… based on the age of the machine that took the x-ray. Because it turns out places with older machines tend to be poorer, and poorer places tend to have more TB.
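That TB anecdote is textbook shortcut learning: the label is correlated with a confound, and the model learns the confound instead of the disease. A toy demonstration with completely made-up numbers:

```python
# Made-up data: TB prevalence rises with x-ray machine age (the confound),
# while the "real" medical feature is weak and noisy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000
machine_age = rng.uniform(0, 30, size=n)                   # older machine = poorer region
tb = (rng.random(n) < (0.01 + 0.10 * machine_age / 30)).astype(int)
lung_signal = tb + rng.normal(0, 2.0, size=n)              # weak genuine signal

X = np.column_stack([lung_signal, machine_age])
model = LogisticRegression().fit(X, tb)
print("coefficients [lung_signal, machine_age]:", model.coef_[0])
# The machine-age coefficient comes out clearly positive: the model is
# partly "diagnosing" the x-ray machine, not the patient.
```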
The only positive use I can think of is time saving measures. A researcher can feed a study to ChatGPT and have it write a rough first draft of the abstract. A Game Master could ask it for inspiration on the next few game sessions if they’re underprepared. An internet commenter could ask it for a third example of how it could save time.
But for anything serious, until it can explain why it comes to the conclusions it comes to, and can understand when a human says “no, you’re doing it wrong,” I can’t see it being a real force for good.
Ehh… at least we know we don’t understand how the AI reached its conclusion. When you study human cognition long enough you discover that our beliefs about how we reach our conclusions are just stories the conscious mind makes up to justify them after the fact.
“No, you’re doing it wrong” isn’t really a problem - it’s fundamental to most ML processes.
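Right — a human correction is just one more labeled example. A rough sketch of that loop (scikit-learn’s partial_fit here, purely as an illustration, not how any particular system does it):

```python
# "No, you're doing it wrong" as a training signal: the correction is
# just one more labeled example, folded back in with partial_fit.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss", random_state=0)
X0 = np.array([[0.0, 1.0], [1.0, 0.0]])
model.partial_fit(X0, [0, 1], classes=[0, 1])   # initial training

x = np.array([[0.9, 0.2]])
pred = model.predict(x)[0]
if pred != 1:                       # human reviewer: "no, that's a 1"
    model.partial_fit(x, [1])       # the correction becomes training data
```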
No, more like the ones that give early warning signs of like, dementia or something.
Like Black Mirror!!
That’s definitely not going to be abused, at all.
Oh good, the Family Feud method of social norms. No value judgement, just “Survey says!”.
Unless this is just for identifying social-norm violations in written communication for the purpose of government-to-government communication, this seems vastly… infeasible, I guess. Norms change over time, so you’re going to have to keep updating this model whenever it’s finally noticed that a change has occurred. If anything, it might generate a completely new set of grammar/phrasing expectations through feedback from this likely-to-not-change-very-much ruleset… as in, if you thought politically correct phrasing was annoying now, just wait until the AI says you’re not doing it well enough.
Idk though, this isn’t my specialty area. Anyone care to tell me how I’m wrong? What good can this really do?
(I swear I did read the article, it just isn’t clicking over the sound of my loud pessimism)
This sounds like a prequel to the show Psycho-Pass.
Psycho-Pass raises a lot of interesting questions and dilemmas because the technology depicted actually works in that setting (aside from a few outlying cases that help drive interesting plots). If there were a scanner of some sort that genuinely could detect violent intent before it was acted on, then I think it would be reasonable and moral to use it in at least some manner.
This will absolutely be used to oppress the neurodivergent at some point
Damn, now I want them to sample me just to see how much I can drive the model insane.