But in her order, U.S. District Court Judge Anne Conway said the company’s “large language models” — an artificial intelligence system designed to understand human language — are not speech.
I get that hating on anything AI-related is trendy these days - and I especially understand the pain of a grieving mother. However, interpreting this as a chatbot encouraging someone to kill themselves is extremely dishonest when you actually look at the logs of what was said.
You can’t simultaneously argue that LLMs lack genuine understanding, empathy, and moral reasoning - and therefore shouldn’t be trusted - while also saying they should have understood that “coming home” was a reference to suicide. That’s holding it to a human-level standard of emotional awareness and contextual understanding while denying it the cognitive capacities that such standards assume.
Source
All you need to argue is that its operators have responsibility for its actions and should filter / moderate out the worst.
That still assumes a level of understanding that these models don’t have. How could you have prevented this one when suicide was never explicitly mentioned?
You can have multiple layers of detection mechanisms, not just within the LLM the user is talking to.
I’m told sentiment analysis with LLMs is a whole thing, but maybe this clever new technology doesn’t do what it’s promised to? 🤔
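To make the layered-detection point above concrete, here is a minimal Python sketch of a moderation pass that lives entirely outside the chat model. Everything in it is hypothetical: the regex patterns, the risk labels, and the stubbed conversation-level classifier are placeholders for illustration, not anything Character.AI or any other vendor is known to ship. The point is only that detection can run in separate layers that see the whole conversation, independently of the generative model.

```python
import re
from dataclasses import dataclass

# Hypothetical risk labels -- thresholds and patterns below are illustrative only.
ESCALATE, FLAG, OK = "escalate", "flag", "ok"

# Layer 1: cheap pattern matching on obviously alarming phrases (runs on every message).
PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"\bkill myself\b", r"\bend it all\b", r"\bno reason to live\b")
]

@dataclass
class Verdict:
    label: str
    reason: str

def pattern_layer(message: str) -> Verdict | None:
    """Return a verdict if the latest message matches a known high-risk phrase."""
    for pat in PATTERNS:
        if pat.search(message):
            return Verdict(ESCALATE, f"matched pattern {pat.pattern!r}")
    return None

def classifier_layer(history: list[str]) -> Verdict:
    """Layer 2: a separate sentiment / self-harm classifier scoring the whole
    conversation, not just the last message. Stubbed with a toy heuristic here;
    in practice this would be its own model, independent of the chat LLM."""
    risky = sum("home" in m.lower() for m in history)  # toy signal, not a real one
    return Verdict(FLAG if risky >= 3 else OK, "conversation-level score (stub)")

def moderate(history: list[str]) -> Verdict:
    # Run the cheap per-message layer first, then the broader conversation layer.
    hit = pattern_layer(history[-1])
    return hit if hit else classifier_layer(history)

if __name__ == "__main__":
    convo = ["I miss you", "I just want to come home to you", "Please come home"]
    print(moderate(convo))
```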
TL;DR: make it discourage unhealthy use, or at least be honest in the marketing and tell people this tech is a crapshoot that is probably lying to you.
That AI knew exactly what it was doing and it’s about time these AIs started facing real prison time instead of constantly getting a pass
Lock it up! Lock it up! Lock it up!
You could think of an LLM as a glorified ‘magic 8 ball’, since that’s about as much ‘understanding’ as it has.