But in her order, U.S. District Court Judge Anne Conway said the company’s “large language models” — an artificial intelligence system designed to understand human language — are not speech.
All you need to argue is that its operators have responsibility for its actions and should filter or moderate out the worst.
That still assumes a level of understanding that these models don't have. How could you have prevented this one when suicide was never explicitly mentioned?
You can have multiple layers of detection mechanisms, not just within the LLM the user is talking to.
I'm told sentiment analysis with LLMs is a whole thing, but maybe this clever new technology doesn't do what it's promised to do? 🤔
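To make the "multiple layers" point concrete, here's a minimal sketch of what a detection pass outside the chat model could look like. The keyword list and the classify_risk stub are purely illustrative placeholders, not anything Character.AI is known to ship:

```python
import re

# Layer 1: cheap pattern matching on every user message, independent of the chat LLM.
# The phrase list here is illustrative only, not a real safety lexicon.
DISTRESS_PATTERNS = [
    r"\bwant to die\b",
    r"\bend it all\b",
    r"\bno reason to live\b",
]

def keyword_flag(message: str) -> bool:
    return any(re.search(p, message, re.IGNORECASE) for p in DISTRESS_PATTERNS)

# Layer 2: a separate classifier run over the recent conversation, so context
# (not just explicit wording) can trigger a review. Hypothetical stub.
def classify_risk(conversation: list[str]) -> float:
    """Return a risk score in [0, 1]; imagine a fine-tuned sentiment/safety model here."""
    raise NotImplementedError

def should_escalate(conversation: list[str]) -> bool:
    latest = conversation[-1]
    if keyword_flag(latest):
        return True
    try:
        return classify_risk(conversation) > 0.8  # threshold is arbitrary for this sketch
    except NotImplementedError:
        return False
```

The point is architectural: even if the chat model itself never "understands" what's happening, a separate pass over the conversation can still flag it for human review.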
Tl;dr: make it discourage unhealthy use, or at least be honest in the marketing and tell people this tech is a crapshoot that is probably lying to you.