I run my own LLM “AI” server at home because I want to try out various aspects and scenarios without having big tech snoop over my shoulder and be able to use any model I want.
I can perfectly well see people getting good, working therapy from an LLM. But I think it would depend on the user taking the LLM seriously, and anybody with sufficient experience with LLMs simply doesn’t.
So the people this could help are the people who shouldn’t be allowed near an “AI” interface…
Let’s see what this LLM says when I run this question 20,000 times from a clean prompt, then compare it against the same question posed more directly and run another 20,000 times. Then I can pick the answer I like better and run that against a different LLM and…
So what you’re saying is that this is NOT what I am supposed to do?