• HappyTimeHarry@lemm.ee · 2 days ago

    Do they not know it works offline too?

    I noticed ChatGPT being pretty slow today compared to the local DeepSeek model I have running, which is pretty sad since my computer is about a bajillion times less powerful.

    • Rogue@feddit.uk · 2 days ago

      Is it possible to download it without first signing up to their website?

        • Rogue@feddit.uk · edited · 2 days ago

          Thanks

          Any recommendations for communities to learn more?

          Frustratingly, their setup guide is terrible. I eventually managed to get it running, downloaded a model, and only after the download finished did it tell me I didn’t have enough RAM to run it, something it could have checked before the slow download. Then I discovered my GPU isn’t supported, and running it on the CPU is painfully slow. I’m using an AMD 6700 XT and the minimum listed is the 6800: https://github.com/ollama/ollama/blob/main/docs/gpu.md#amd-radeon
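
          That said, the ROCm build of ollama can apparently be coaxed into using some officially unsupported RDNA2 cards by overriding the architecture the GPU reports. A minimal sketch, assuming Docker, the ollama/ollama:rocm image, and the community workaround of having the 6700 XT (gfx1031) identify as gfx1030 via ROCm’s HSA_OVERRIDE_GFX_VERSION variable — I can’t vouch for it on this exact card:

          ```yaml
          # Hypothetical docker-compose.yml sketch — unverified on a 6700 XT
          services:
            ollama:
              image: ollama/ollama:rocm        # ROCm (AMD) build of the image
              devices:
                - /dev/kfd                     # ROCm compute interface
                - /dev/dri                     # GPU render nodes
              environment:
                # Community workaround: report the gfx1031 card as gfx1030,
                # which is on the supported list (RX 6800 class)
                - HSA_OVERRIDE_GFX_VERSION=10.3.0
              volumes:
                - ollama:/root/.ollama         # persist downloaded models
              ports:
                - "11434:11434"                # ollama API
          volumes:
            ollama:
          ```

          The /dev/kfd and /dev/dri device mounts are what give the container access to the GPU under ROCm.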

            • Rogue@feddit.uk · 1 day ago

              Thanks, I did get both set up with Docker; my frustration was that neither ollama nor open-webui includes instructions on how to set the two up together.

              In my opinion, setup instructions should guide you to a usable setup. It’s a missed opportunity not to include a docker-compose.yml connecting the two. Is anyone really using ollama without a UI?
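
              Something like this is roughly what I mean — a minimal sketch, assuming the stock ollama image and the ghcr.io/open-webui/open-webui image, with OLLAMA_BASE_URL being the variable open-webui reads to find the ollama API:

              ```yaml
              # Minimal docker-compose.yml sketch connecting the two
              services:
                ollama:
                  image: ollama/ollama
                  volumes:
                    - ollama:/root/.ollama         # persist models between restarts
                open-webui:
                  image: ghcr.io/open-webui/open-webui:main
                  depends_on:
                    - ollama
                  environment:
                    # Point the UI at the ollama container over the compose network
                    - OLLAMA_BASE_URL=http://ollama:11434
                  ports:
                    - "3000:8080"                  # UI at http://localhost:3000
                  volumes:
                    - open-webui:/app/backend/data # persist chats and settings
              volumes:
                ollama:
                open-webui:
              ```

              With one file like that, `docker compose up -d` would get you to a usable setup in a single step.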