“I’ve been saving for months to get the Corsair Dominator 64GB CL30 kit,” one beleaguered PC builder, u/RaidriarT, wrote on Reddit. “It was about $280 when I looked. Fast forward today on PCPartPicker, they want $547 for the same kit? A nearly 100% increase in a couple months?”

  • Passerby6497@lemmy.world · 22 hours ago

    I for one would enjoy triggering your unskippable cutscenes by setting up local CPU-based AI, if it can work on Linux with an older AMD card.

    Don’t have funds for anything fancy, but I’d be interested in playing around with it. Been wanting to get something like that set up for Home Assistant.

    • SabinStargem@lemmy.today · 10 hours ago

      If you just want an easy way to set up AI on Windows or Linux, KoboldCPP is my recommendation for your backend. It supports the GGUF format, which allows you to use both RAM and VRAM simultaneously. It won’t be the fastest thing, but it is easy enough to set up, with a bundled GUI for prep and actual usage. Through the IP address it gives you, you can hook the backend into a frontend of your choice (quick script example below).

      KoboldCPP
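      For anyone wondering what “hooking into the backend” looks like in practice, here is a minimal sketch in Python. It assumes the default port 5001 and KoboldCPP’s standard generate endpoint; use whatever address the launcher actually prints:

      ```python
      # Minimal sketch: query a running KoboldCPP backend over HTTP.
      # Assumes the default port 5001; swap in the address KoboldCPP
      # prints on startup if yours differs.
      import requests

      resp = requests.post(
          "http://localhost:5001/api/v1/generate",
          json={
              "prompt": "You are a home assistant. Turn off the living room lights.",
              "max_length": 80,  # cap on generated tokens
          },
          timeout=120,
      )
      resp.raise_for_status()
      print(resp.json()["results"][0]["text"])
      ```

      Any frontend that can POST JSON works the same way, which is why the backend/frontend split is so convenient.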

    • brucethemoose@lemmy.world · 22 hours ago

      Plenty of folks do AMD. A popular homelab setup is 32GB AMD MI50 GPUs, which are quite cheap on eBay. Even Intel is fine these days!

      But what’s your setup, precisely? CPU, RAM, and GPU.

      • afk_strats@lemmy.world · 21 hours ago

        I have an MI50/7900 XTX gaming/AI setup at home which I use for learning and to test out different models. Happy to answer questions.

      • brucethemoose@lemmy.world · 20 hours ago

        The key is which model, and how you run it.

        For the really sparse MoEs, you might be better off trying ik_llama.cpp, especially if you are targeting a ‘small’ quant. But the dense Gemma models (as good as they are) are probably not the best choice for 8GB of RAM these days.
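        For a rough sense of what fits, here’s a back-of-the-envelope sketch (the bits-per-weight figure is an approximation, and it ignores KV cache and runtime buffers):

        ```python
        # Rough sizing: approximate GGUF file size for a quantized model.
        # Rule of thumb only -- ignores KV cache and runtime overhead.

        def gguf_size_gib(params_billions: float, bits_per_weight: float) -> float:
            """Approximate model file size in GiB: params * bits / 8 bytes."""
            return params_billions * 1e9 * bits_per_weight / 8 / 2**30

        # e.g. a 12B dense model at Q4_K_M (~4.8 bits/weight effective)
        print(f"{gguf_size_gib(12, 4.8):.1f} GiB")  # ~6.7 GiB
        ```

        If the file size lands within a gigabyte or so of your total RAM, there’s little left for context, which is why a smaller quant or a sparse MoE starts to make sense on a machine like that.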