• stoy@lemmy.zip
    13 hours ago

    They don’t have the capability to “admit” to anything.

    You are falling into the same trap as the guy who had his development project deleted by an AI despite having had it “promise” not to do that.

    The AIs we use today have no understanding of “admitting” or “promising”; to them, these are just words, with no underlying concept.

    Please stop treating AIs as if they were human; they are absolutely not.

    • Earthman_Jim@lemmy.zip
      12 hours ago

      People really need to understand that it’s just very complex predictive text amounting to a Rorschach test.

    • criss_cross@lemmy.world
      10 hours ago

      It’s the same trap that execs fall into when they think they can replace humans with AI.

      Gen AI doesn’t “think” for itself. All the models do is answer “given the text X in front of me, what’s the most probable response to spit out?” They have no concept of memory or anything like that. Even chat convos are a bit of a hack: all that really happens is that every bit of text in the convo up to that point gets thrown in as X. It’s why chat window limits exist in LLMs, because there’s a limit to how much text they can throw in for X before the model shits itself.
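      A rough sketch of that “everything gets thrown in as X” hack. The `complete()` call, the character-based limit, and the drop-oldest-turns strategy are all made up for illustration; real systems count tokens and vary in how they truncate:

```python
# Sketch: a "chat" is just one growing prompt that gets rebuilt every turn.
CONTEXT_LIMIT = 4000  # characters here; real models budget in tokens

def build_prompt(history):
    # Flatten every turn so far into one text blob: this is the "X"
    # the model actually sees. There is no memory beyond this string.
    text = "\n".join(f"{role}: {msg}" for role, msg in history)
    # When the convo outgrows the window, oldest turns get dropped
    # (one common strategy; others summarize instead).
    while len(text) > CONTEXT_LIMIT and len(history) > 1:
        history.pop(0)
        text = "\n".join(f"{role}: {msg}" for role, msg in history)
    return text

history = [("user", "Hello"), ("assistant", "Hi!"),
           ("user", "What did I say first?")]
prompt = build_prompt(history)
# prompt is all the model gets; if "Hello" fell out of the window,
# the model literally cannot know what you said first.
```

      Once the earliest turns are evicted, the model has no way to recover them, which is exactly why it “forgets” the start of long conversations.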

      That doesn’t mean they can’t be useful. They’re semi-decent at taking human input and translating it into programmatic calls (something you previously needed clunky NLP libraries to do). They’re also okay at summarizing info.
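      That “translate human input into programmatic calls” pattern usually means asking the model to emit structured output and dispatching on it. A minimal sketch, where the JSON shape, the `set_thermostat` function, and the canned `llm_output` string are all hypothetical stand-ins for a real model response:

```python
import json

# A real function the app exposes; the model never runs code itself,
# it only names a tool and its arguments.
def set_thermostat(temperature):
    return f"thermostat set to {temperature}"

TOOLS = {"set_thermostat": set_thermostat}

# Pretend the model turned "make it 21 degrees in here" into this JSON.
llm_output = '{"tool": "set_thermostat", "args": {"temperature": 21}}'

call = json.loads(llm_output)
result = TOOLS[call["tool"]](**call["args"])
```

      The model is just a fuzzy text-to-structure translator here; the actual behavior lives in ordinary code, which is why this is one of the few uses that holds up.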

      But the chatbots and the garbage hype around them have people convinced that these are things they’re not. And every company is now learning the hard way that there are hard limits to what these things can do.