I’m pulling the “Twitter is a microblog” rule even though Twitter is pretty mega now, hope that’s ok.

  • Kptkrunch@lemmy.world · 5 days ago

    I know this sounds great to most people, but it demonstrates a very superficial level of thinking. Sure, an LLM is capable of asking questions, and if you set it up with real-time “sensory” input it could generate constant reactions to that input, much in the way you are constantly being stimulated to react to your environment. I am not really sure what the distinction is between a biological brain and a predictive model or algorithm. I would ask you what you think your own brain is doing on a fundamental level.

    • GuyIncognito@lemmy.ca · 4 days ago

      Fuck if I know, but it seems to me that intelligence is more than just reacting to stimuli. The problem is we’ve broken the Turing test: we’ve made a computer that can sound sentient but clearly isn’t.

    • Echo Dot@feddit.uk · 4 days ago

      I would actually argue that it is the most important question.

      Surely the most relevant test of any intelligence is whether or not it is self-starting. Any classical description of an artificial general intelligence would surely require the thing to actually do work on its own. If an intelligence has greater-than-human intellect but has to be prompted in order to do anything, then it will always be limited by what a human can think to prompt for.

      • Kptkrunch@lemmy.world · 4 days ago

        I think you are describing some notion of a “will” or motive, but also potentially describing an LLM’s lack of temporal experience. I would argue that a human is constantly being “prompted” to react to things happening to them via sensory input, and adding that to an LLM is trivial, provided the input is in a modality it can understand, like text or image embeddings.
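
        A minimal sketch of that kind of continuous “sensory prompting” loop, where every function is a hypothetical stub rather than any real API, just to show the shape of the idea:

        ```python
        # Toy "sensory loop": the environment re-prompts the model every
        # tick instead of a human typing prompts. All names here are
        # hypothetical stand-ins, not any specific library's API.
        import time

        def read_sensor() -> str:
            """Stand-in for a camera/microphone encoder that yields a
            text (or embedding) description of the current moment."""
            return f"t={time.time():.0f}: nothing unusual in view"

        def llm(prompt: str) -> str:
            """Stand-in for any chat-completion call."""
            return f"(model reacts to: {prompt!r})"

        history: list[str] = []
        for _ in range(3):                    # run forever in principle
            observation = read_sensor()      # the world "prompts" the model
            history.append(observation)
            reaction = llm("\n".join(history[-10:]))  # react to recent context
            history.append(reaction)
            time.sleep(1)                    # tick rate of the "senses"
        ```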

        As far as will or motive to perform tasks goes, some think an AI agent could generate secondary sub-goals, like a will to “survive”, in order to carry out primary tasks like “make paperclips efficiently”. This is called instrumental convergence, and it’s speculative. I think what would really be scary is if someone explicitly optimized a model with billions of parameters to survive or carry out some specific task, using online reinforcement learning. I don’t think there is a big technical hurdle there. You could imagine a sort of adversarial-style training where one model predicts damage/danger/threats and the other attempts to avoid those. We could propagate rewards and punishments back over the sequence of actions that led to that state and train while the model is interacting with its environment.
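
        Roughly, “propagating rewards back over the sequence of actions” is what discounted returns already do in standard RL. A toy sketch under that assumption, with the threat predictor and policy as hypothetical stubs:

        ```python
        # Toy online RL rollout: a "threat model" scores each state for
        # danger, and discounted returns spread that penalty back over
        # the whole action sequence that led there. All components are
        # hypothetical stubs, not a real training setup.
        import random

        GAMMA = 0.95  # discount factor: how far back credit/blame reaches

        def threat_score(state: float) -> float:
            """Stand-in for the adversarial 'danger predictor' model."""
            return -abs(state)          # more negative = more dangerous

        def act(state: float) -> float:
            """Stand-in policy: random step (a real agent would sample
            from a learned policy here)."""
            return random.choice([-1.0, 1.0])

        # Roll out one episode, recording (state, action, reward) per step.
        state, trajectory = 0.0, []
        for _ in range(10):
            action = act(state)
            state += action
            trajectory.append((state, action, threat_score(state)))

        # Walk the trajectory backwards so each step's return includes
        # the discounted rewards of everything that followed it.
        returns, g = [], 0.0
        for _, _, reward in reversed(trajectory):
            g = reward + GAMMA * g
            returns.append(g)
        returns.reverse()
        # A policy-gradient update would now reinforce actions with high
        # returns and suppress the ones that led toward danger.
        ```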