• filister@lemmy.worldOP
    2 days ago

    That’s a very toxic attitude.

Inference is, in principle, the process of generating the AI's response. So when you run an LLM locally, you are using your GPU only for inference.
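To make that concrete: inference is the autoregressive loop where the model is called once per generated token, and each forward pass is the GPU-heavy step. A toy sketch of that loop (the "model" here is just a stand-in lookup table, not a real network):

```python
# Toy illustration of LLM inference. In a real LLM, predict_next would be
# one forward pass of the neural network -- the part the GPU accelerates.
TOY_MODEL = {
    "the": "cat",
    "cat": "sat",
    "sat": "down",
}

def predict_next(token: str) -> str:
    """Stand-in for one forward pass of the model."""
    return TOY_MODEL.get(token, "<eos>")

def generate(prompt: str, max_tokens: int = 8) -> list[str]:
    """Autoregressive generation: feed the last token back in each step."""
    tokens = prompt.split()
    for _ in range(max_tokens):
        nxt = predict_next(tokens[-1])
        if nxt == "<eos>":  # model signals it is done
            break
        tokens.append(nxt)
    return tokens

print(generate("the"))  # each loop iteration is one inference step
```

Training, by contrast, also needs a backward pass and weight updates, which is why it demands far more compute than running the model locally.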

    • aaron@lemm.ee
      2 days ago

      Yeah, I misread because I’m stupid. Thanks for replying, non-toxic man.