• 0 Posts
  • 38 Comments
Joined 2 years ago
Cake day: July 2nd, 2023


  • While I’m sure the obvious systemic issues contribute to nobody looking for alternatives, that does sound largely like an issue inherent to optical pulse oximeters. Engineers aren’t miracle workers; they can’t change physics to their liking.

    I’m sure pulse oximeters are more accurate now than they were 20 years ago. We’re still using them because no alternative has been found that is as easy to use, reliable, and non-invasive, even with the known downsides.



  • Yes, you’re anthropomorphizing far too much. An LLM can’t understand or recall (in the common sense of the word, i.e. have a memory), and is not aware.

    Those are all things that intelligent, thinking things do. LLMs are none of that. They are a giant black box of math that predicts text. It doesn’t even understand what a word is, or the meaning of anything it vomits out. All it knows is which text is statistically most likely to come next, with a little randomization to add “creativity”.
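
    The prediction loop described above can be sketched in a few lines. This is a toy illustration, not any real model: the vocabulary and scores are invented, and a real LLM produces its scores from billions of parameters. But the final step, "pick the statistically likeliest continuation, with a randomness knob", really is just this:

    ```python
    import math
    import random

    def sample_next_token(logits, temperature=1.0, rng=random):
        """Turn raw scores into probabilities (softmax) and sample one token.

        temperature < 1 sharpens the distribution toward the likeliest
        token; temperature > 1 flattens it, adding more "creativity".
        """
        scaled = [score / temperature for score in logits.values()]
        m = max(scaled)  # subtract the max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        return rng.choices(list(logits.keys()), weights=probs, k=1)[0]

    # Invented scores for continuing "The cat sat on the ..."
    logits = {"mat": 3.2, "chair": 1.1, "moon": -0.5}

    # Near-zero temperature is effectively greedy: almost always "mat".
    print(sample_next_token(logits, temperature=0.01))
    ```

    There is no lookup of meaning anywhere in that loop — just arithmetic over scores, which is the point of the comment.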





  • Eranziel@lemmy.worldtoTechnology@lemmy.worldThe GPT Era Is Already Ending
    7 months ago

    This article and discussion are specifically about massively upscaling LLMs. Go follow the links and read OpenAI’s CEO literally proposing data centers that require multiple dedicated, grid-scale nuclear reactors.

    I’m not sure what your definition of optimization and efficiency is, but that sure as heck does not fit mine.