I have great success with local LLMs in some of my workflows and automation.
I use them for line completion and for basic functions/asks while developing that I don't want to waste tokens on.
I also use them in automation. I run my own media server for a few dozen people, with an automated request system, Jellyseerr, that adds content. I have automation that leverages a local LLM to look at recent media requests and automatically request similar content, roughly like the sketch below.
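A minimal sketch of that automation, assuming Jellyseerr's Overseerr-style API and a local model served through Ollama. The URLs, API key, model name, and exact field names are placeholders/assumptions from memory, so adjust them for your setup:

```python
# Sketch: pull recent Jellyseerr requests, ask a local LLM for similar titles,
# then submit those as new requests. Endpoints/fields are assumptions.
import requests

JELLYSEERR_URL = "http://localhost:5055"   # assumed Jellyseerr address
JELLYSEERR_KEY = "your-api-key"            # from Jellyseerr settings
OLLAMA_URL = "http://localhost:11434"      # assumed local Ollama instance

headers = {"X-Api-Key": JELLYSEERR_KEY}

# 1. Fetch the most recent requests (Overseerr-style endpoint).
recent = requests.get(
    f"{JELLYSEERR_URL}/api/v1/request",
    params={"take": 10, "sort": "added"},
    headers=headers,
).json()["results"]

# Field names on the media object vary; fall back to the TMDB id if no title.
titles = [str(r["media"].get("title") or r["media"].get("tmdbId")) for r in recent]

# 2. Ask the local model for similar titles, one per line.
prompt = (
    "Here are recently requested movies/shows:\n"
    + "\n".join(f"- {t}" for t in titles)
    + "\nSuggest 5 similar titles, one per line, titles only."
)
resp = requests.post(
    f"{OLLAMA_URL}/api/generate",
    json={"model": "llama3", "prompt": prompt, "stream": False},
).json()
suggestions = [s.strip("- ").strip() for s in resp["response"].splitlines() if s.strip()]

# 3. Look each suggestion up in Jellyseerr and request the first match.
for title in suggestions:
    results = requests.get(
        f"{JELLYSEERR_URL}/api/v1/search",
        params={"query": title},
        headers=headers,
    ).json().get("results", [])
    if not results:
        continue
    hit = results[0]
    requests.post(
        f"{JELLYSEERR_URL}/api/v1/request",
        json={"mediaType": hit["mediaType"], "mediaId": hit["id"]},
        headers=headers,
    )
```

I run something along these lines on a schedule (cron works fine); the only real design choice is keeping the prompt to plain titles so even a small local model gives usable output.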
Local AI can be useful. But I would rather see nice implementations that use small but brilliantly tuned models for, let's say, better predictive text… it's already somewhat AI based, I just would like it to be better.
A local LLM is still an LLM… I don’t think it’s gonna be terribly useful no matter how good your hardware is