• lmuel@sopuli.xyz · 9 days ago

    A local LLM is still an LLM… I don’t think it’s gonna be terribly useful no matter how good your hardware is

    • maus@sh.itjust.works · 9 days ago

      I have great success with a local LLM in some of my workflows and automation.

      I use it for line completion and for basic functions/asks while developing that I don't want to waste tokens on.
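
      Something like the following is the shape of that, as a minimal sketch: it assumes an Ollama server on localhost and a small code model, neither of which the comment actually names, so treat both as placeholders.

```python
# Minimal sketch of "cheap local completion" -- assumes an Ollama server on
# localhost:11434; the model name is a placeholder, the comment doesn't say
# which runtime or model is actually used.
import requests

def local_complete(code_fragment: str) -> str:
    """Ask the local model to finish a short piece of code."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "qwen2.5-coder:3b",  # placeholder model name
            "prompt": "Complete this code, reply with code only:\n" + code_fragment,
            "stream": False,
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(local_complete("def parse_duration(value: str) -> int:"))
```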

      I also use it in automation. I run my own media server for a few dozen people, with an automated request system (Jellyseerr) that adds content. I have automation that leverages the local LLM to look at recent media requests and automatically request similar content.
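
      A rough sketch of that flow, assuming an Ollama server for the local LLM and Jellyseerr's Overseerr-style REST API; the endpoint path, query params, and response fields are assumptions, so check your instance's API docs before wiring anything up.

```python
# Rough sketch of the Jellyseerr + local LLM automation. The Jellyseerr
# endpoint path, query params, and response fields below are assumptions
# modelled on the Overseerr API -- verify against your instance's /api-docs.
import requests

JELLYSEERR = "http://jellyseerr.local:5055"  # placeholder URL
API_KEY = "your-jellyseerr-api-key"          # placeholder key
OLLAMA = "http://localhost:11434"

def recent_request_titles(limit: int = 20) -> list[str]:
    """Fetch the most recent media requests from Jellyseerr."""
    r = requests.get(
        f"{JELLYSEERR}/api/v1/request",
        headers={"X-Api-Key": API_KEY},
        params={"take": limit, "sort": "added"},
        timeout=15,
    )
    r.raise_for_status()
    # Field names are assumptions based on the Overseerr schema.
    return [req.get("media", {}).get("title", "unknown") for req in r.json()["results"]]

def suggest_similar(titles: list[str]) -> str:
    """Ask the local model for titles similar to what was just requested."""
    prompt = (
        "These titles were recently requested on my media server:\n"
        + "\n".join(f"- {t}" for t in titles)
        + "\nSuggest 5 similar titles, one per line, no commentary."
    )
    r = requests.post(
        f"{OLLAMA}/api/generate",
        json={"model": "llama3.1:8b", "prompt": prompt, "stream": False},
        timeout=120,
    )
    r.raise_for_status()
    return r.json()["response"]

if __name__ == "__main__":
    # The real automation would then search Jellyseerr for each suggestion and
    # POST a new request; that step is left out here.
    print(suggest_similar(recent_request_titles()))
```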

    • luridness@lemmy.ml · 9 days ago

      Local AI can be useful. But I would rather see nice implementations that use small but brilliantly tuned models for… let's say, better predictive text. It's already somewhat AI-based; I just would like it to be better.