• 0 Posts
  • 870 Comments
Joined 2 years ago
Cake day: October 6th, 2023

  • I have been studying AI for ten+ years

    Really? Cool! What’s your take on people deliberately muddying the waters by conflating LLMs with other forms of AI like interpretative models?

    And especially keeping in mind that the latter are a roaring success, while the former are probably the single most expensive waste of time, money, and effort known to humanity.



  • Workplace safety is quickly turning from a factual and risk-based field into a vibes-based field, and that’s a bad thing for 95% of real-world risks.

    To elaborate a bit: the current trend in safety is “Safety Culture”, meaning “Getting Betty to tell Alex that they should actually wear that helmet and not just carry it around”. And at that level, that’s a great thing. On-the-ground compliance is one of the hardest things to actually implement.

    But that training is taking the place of actual, risk-based training. It’s all well and good that you feel comfortable talking about safety, but if you don’t know what you’re talking about, you’re not actually making things safer. This is also a form of training that’s completely useless at any level above the worksite. You can’t make management-level choices based on feeling comfortable; you need to actually know some stuff.

    I’ve run into numerous issues where people feel safe when they’re not, and feel at risk when they’re safe. Safety Culture is absolutely important, and feeling safe to talk about your problems is a good thing. But that should come AFTER being actually able to spot problems.



  • I’m a bit more pessimistic. I fear that LLM-pushers calling their bullshit-generators “AI” is going to drag other applications down with them. Because I’m pretty sure that when LLMs all collapse into a heap of unprofitable e-waste and take most of the stock market with them, the funding and capital for the rest of AI is going to die right along with LLMs.

    And there are lots of useful AI applications in every scientific field; data interpretation with AI is extremely useful, and I’m very afraid it’s going to suffer from OpenAI’s death.