• pixxelkick@lemmy.world
    2 years ago

    I'm curious to see what sort of recommended minimum specs there will be for these features. It's my understanding that these sorts of models require a non-negligible amount of horsepower to run in a timely manner.

    At the moment I'm running Nextcloud on some Raspberry Pis, and my gut tells me I might need a bit more oomph than that to handle this sort of real-time AI prompting >_>;

    • Brownian Motion@lemmy.world
      2 years ago

      The AI that Nextcloud is offering uses OpenAI: sign up, get an API key, and add it. Your AI requests go to the cloud. (And I couldn't get it to work — constant "too many requests" errors or a straight "failed".)
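      For anyone trying the cloud route, the key can be pasted into the admin settings or set from the command line with `occ`. A rough sketch, assuming the integration app id is `integration_openai` and the config key is `api_key` (both may differ by version — check the app's docs):

      ```shell
      # Enable the OpenAI integration app (app id is an assumption)
      php occ app:enable integration_openai

      # Store your OpenAI API key (config key name is an assumption; verify in the app's docs)
      php occ config:app:set integration_openai api_key --value="sk-..."
      ```

      The "too many requests" errors generally come from OpenAI's rate limits rather than Nextcloud itself, so a paid API tier or lower request volume may help.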

      The other option is the "Local large language model" addon: you download a cut-down LLM like Llama 2 or Falcon and it runs locally. I did get those all installed, but it didn't work for general prompts.
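      If you'd rather go the local route, the addon can be installed from the app store or the command line. A sketch, assuming the app id is `llm` (the id is an assumption and may differ by Nextcloud version):

      ```shell
      # Install and enable the local LLM app (app id assumed; check the app store listing)
      php occ app:install llm
      php occ app:enable llm
      ```

      The models themselves are downloaded separately and run on the server's own hardware, so expect this to be slow on modest machines like a Raspberry Pi.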

      Nextcloud will probably fix things over time, and the developer who made the local LLM plugin will too, but right now this isn't very useful to self-hosters.

        • Brownian Motion@lemmy.world
          2 years ago

          I just asked it to write an assembly program for the Intel 8008 microprocessor, and it just knocked it out! That's not bad for a chip that was released in 1972!

    • Lupec@lemm.ee
      2 years ago

      Yeah, I'm wondering the same and also figure the requirements will be pretty significant. Still, I'm happy to see things like this and Home Assistant's recent work on local voice assistants.