i built a pc that has a crap ton of processing power, but i know nothing about the software side of things.

thoughts? prayers? concerns? comments? @$%&'s to give?

  • 0x01@lemmy.ml · 3 days ago
    Processing power (CPU) doesn’t really matter as much as the GPU, and on consumer-grade machines the constraint is generally GPU memory. Nvidia chips have become the standard for this kind of processing, which is a huge part of why Nvidia is now the single most valuable company on the planet. You can run on CPU, but you’ll find the performance almost unbearably slow.

    Ollama is the easiest option, but you can also use PyTorch (ExecuTorch), vLLM, etc.

    You can download your model through Hugging Face, or sometimes directly from the lab’s website.

    It’s worth learning the technical side but ollama genuinely does an excellent job and takes a ton off your plate
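To make the Ollama suggestion concrete: once the server is installed and a model has been pulled (e.g. with `ollama pull llama3`), you can talk to it from Python over its local HTTP API. This is a minimal sketch, assuming the default port (11434) and the `llama3` model tag; swap in whatever model you actually pulled:

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # Non-streaming request body for Ollama's /api/generate endpoint
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    # POST the prompt and return the model's full response text
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# usage (requires a running Ollama server and a pulled model):
# print(ask("llama3", "Why is the sky blue?"))
```

Nothing here is Ollama-specific beyond the URL and payload shape, so the same pattern works against any backend that exposes a similar HTTP endpoint.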

  • Disregard3145@lemmy.world · 3 days ago (edited)
    What do you mean by “make”? What do you want it to do that you aren’t getting?

    Maybe some existing model via ollama - llama-uncensored?

    Do you need to add context from some specific set of data? Should it be retrieval-based, fine-tuned, or cross-trained?

    Does it even need to be an llm? What are you trying to actually achieve?

    • bobbyguy@lemmy.world (OP) · 3 days ago
      i want to make my own chatbot that can also act without my input: create emails, do online jobs, make its own decisions, things like that
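What the OP is describing is essentially an agent loop: the model is called repeatedly, its own output is fed back in, and it keeps acting until it signals it is done. A toy sketch of just the control flow, with the model call passed in as a plain function so any backend (such as a local Ollama server) could be plugged in; the `DONE` stop token and the step limit are assumptions for illustration, not part of any real framework:

```python
from typing import Callable

def agent_step(history: list[str], model: Callable[[str], str]) -> str:
    # Feed the whole conversation back in and record the model's next action
    prompt = "\n".join(history)
    reply = model(prompt)
    history.append(reply)
    return reply

def run_agent(goal: str, model: Callable[[str], str], max_steps: int = 3) -> list[str]:
    # Minimal autonomous loop: keep acting until the model says DONE
    # or a hard step limit is hit (so it can't run away unattended)
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        reply = agent_step(history, model)
        if "DONE" in reply:  # hypothetical stop token the prompt would ask for
            break
    return history

# usage with a stub model (a real one would call e.g. a local Ollama server):
# run_agent("draft a polite email", lambda p: "Dear ..., DONE")
```

The hard part is not this loop; it is making the model's replies reliable enough to act on (sending email, doing jobs online) without a human checking each step, which is why the replies above ask what the OP actually needs first.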