• 138 Posts
  • 472 Comments
Joined 2 years ago
Cake day: March 19, 2024

  • I guess, but “regularly” is hard to prove in court, or at least it was before 2025. Also, before 2025, something would have had to happen for you to be investigated for that in the first place. I’m sure now they’ll just make up a reason to investigate pot smokers.

    I do wonder how it would go over in court now. In a jury trial, the prosecution would likely still have to prove that you “regularly” smoke pot, right?

    I suppose my point is that it probably won’t do much to stop pot smokers from owning guns (especially those who already own them) if it’s just a yes/no question on a form.













  • Then why bring up code reviews and 500 lines of code? We weren’t talking about your “simulations” or whatever else you’ve brought up here. We’re talking about you saying it can generate 500 lines of code, and that it’s okay to ship that if it “just works” and have someone else review your slop.

    I have no idea what you’re trying to say with your first paragraph. Are you trying to say it’s impossible for it to coincidentally get a correct result? Because that’s literally all it can do. LLMs do not think, reason, or understand; they are not capable of that. They are literally hallucinating all of the time, because that’s how they work. That’s why OpenAI had to admit they are unable to stop hallucinations: given how LLMs work, it’s impossible.