A bill under consideration in New York would provide a private right of action, allowing people to file lawsuits against chatbot owners who violate the law.
If implemented, that would just ban chatbots that use large language models. It’s not a terrible idea.
What would actually happen is that so-called AI chatbot systems would try to detect whether someone is from New York, try to exclude them from receiving medical or legal advice, fail, get sued, and pay a small fine, over and over again forever.
This is a really bad idea.
First, because healthcare is clearly being gatekept from people.
Second, because even if you go to a healthcare professional nowadays, there is no guarantee that that person is not a fucking idiot who doesn’t believe in vaccines. I can’t believe I have to actually ask people, before they touch me, whether they believe in vaccines, and then tell them not to come back into my room if they answer that they don’t believe in science. But that has happened, it has happened to people I’ve taken care of, and because of this healthcare can’t be trusted now.
The LLM is not any worse than that. In fact, I would say that it’s already too cautious. No way the model is ever going to tell me vaccines are bad. It’s not going to tell me to take a poison to clear Covid. It’s not going to tell me to drink bleach like the president did. It’s literally not any worse than the bullshit we are dealing with all day every fucking day.
And I’m getting to the point that if you’re a full-grown human fucking being and you’re going to believe something that tells you to drink fucking bleach or swallow a fucking lightbulb, then that’s nature saying something about you.
Naw, completely disagree. If you had a calculator you knew was defective, you would ban doctors and lawyers from using it.
You also seem to think that an LLM is going to be inherently more accurate than an expert human. We can see with GrokAI how easy it is to manipulate an AI into saying racist white-nationalist garbage. So we are not just trusting the technology but also a layer of unpredictable corporate meddling.
Why does the LLM recommend this drug but not the other one? We can quickly see how a corporation could favor a certain medication due to behind-the-scenes deals, or even push a medication outright.
You can’t trust a black box you are not allowed to look into. Trust in an LLM at this point is pure folly.
Funny thing is, LLMs are bad as calculators too; I’ve seen them get simple multiplication wrong.
It’s capable of generating content, but unable to verify or know on its own whether that content is correct. A lot of people don’t realize that, because the less they know about a subject, the smarter the model will seem to them; they forget it’s, well… a language model. As in, just outputting what can be complete gibberish.
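The only real fix I’ve found is to never take a number from it at face value and recompute it outside the model. A minimal sketch of that kind of check; the check_multiplication helper and the claimed answers below are made up for illustration, not real model output:

```python
# Minimal sketch: verify a model's arithmetic claim outside the model.
# The "claimed" values are hypothetical stand-ins for model output.

def check_multiplication(a: int, b: int, claimed: int) -> bool:
    """Return True only if the claimed product matches exact arithmetic."""
    return a * b == claimed

claims = [
    (87, 93, 8091),          # 87 * 93 really is 8091 -> passes
    (1234, 5678, 7006452),   # exact product is 7006652 -> fails (close, but off)
]

for a, b, claimed in claims:
    status = "ok" if check_multiplication(a, b, claimed) else f"WRONG (exact: {a * b})"
    print(f"{a} * {b} = {claimed}: {status}")
```

The second claim is the typical failure mode: the answer looks plausible and is in the right ballpark, but it is simply not the product.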
Some of the SOTA models like gemini 3 pro are getting quite good at ballpark estimations. I have fed them multiple complex formulas from my studies and some values. The end result is often quite close, and similar in accuracy to how I would do an estimation myself. (It is usually more accurate than my own.)
Now I don’t argue there is any consciousness or magic going on.
But I think the generalization that is going on is quite something! I have trained AI models for various robot control and computer vision tasks. Compared to older machine learning approaches, transformers are very impressive, computationally accessible, and easy to use. (In my limited experience.)
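For what it’s worth, the way I judge those ballpark answers is to plug the same values into the formula myself and look at the relative error. A rough sketch of that comparison; the drag-force formula and the model’s estimate here are assumptions for illustration, not something I actually ran through gemini 3 pro:

```python
# Rough sketch: compare a model's ballpark answer to the exact formula value.
# The drag-force example and the "model_estimate" are illustrative assumptions.

def drag_force(rho: float, v: float, cd: float, area: float) -> float:
    """Aerodynamic drag: F = 0.5 * rho * v^2 * Cd * A."""
    return 0.5 * rho * v**2 * cd * area

exact = drag_force(rho=1.225, v=30.0, cd=0.3, area=2.2)   # ~363.8 N
model_estimate = 350.0                                     # hypothetical ballpark from a model

rel_error = abs(model_estimate - exact) / exact
print(f"exact: {exact:.1f} N, estimate: {model_estimate:.1f} N, error: {rel_error:.1%}")
```

A few percent off is fine for an estimate; the point is that I only know that because I computed the exact value myself.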
I find it okay for writing programs, since you can check the output to see whether it’s correct (see the sketch after this comment).
But for actual analysis, not so much: when you verify what comes out, it’s not completely reliable, even for things it should be reliable on, like numbers. The numbers might be close, but they’re still off.
Abstract stuff might be fine, but it’s still not something to trust entirely for analysis, because of the errors. There’s a lot of double-checking that needs to go on.
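Concretely, this is what I mean by being able to verify a program but not an analysis: a minimal sketch where generated_median stands in for a function a model wrote (the function and the test cases are hypothetical, just for illustration), and the only thing I trust is that my own hand-written checks pass.

```python
# Tiny sketch: trust model-written code only after it passes tests you wrote yourself.
# "generated_median" stands in for a function a model produced; the checks are mine.

def generated_median(values):
    """Hypothetical model-written median (as if pasted from a chatbot)."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# Hand-written checks the output has to pass before I believe it.
assert generated_median([3, 1, 2]) == 2
assert generated_median([4, 1, 3, 2]) == 2.5
assert generated_median([7]) == 7
print("generated code passed the hand-written checks")
```

There is no equivalent cheap check for “analyze this data and tell me what it means,” which is why I don’t extend the same trust there.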
100% fact.