• pixxelkick@lemmy.world
    7 months ago

    This sort of ignores the fact that advances in this technology are widely applicable across tasks; we literally just started with text and image generation because:

    1. The training data is plentiful and basically free to get your hands on

    2. It’s easy to verify it works

    LLMs will crawl so that ship-breaking robots can run.

    • sturlabragason@lemmy.world
      7 months ago

      Second this.

      We’re in the early days, and every day I add a new model or technique to my reading list. We’re close to talking to our CPUs. We’re building these stacks. We’re solving the memory problems. You don’t need RAG with a million-token context, Gorilla-style models can talk to APIs, and most models are great at Python, which is versatile as fuck. I can see the singularity on the horizon.

      Try Ollama if you want to test things yourself.
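
      As a concrete starting point, here’s a minimal sketch of querying a locally running Ollama server from Python over its REST API. It assumes Ollama is installed and serving on its default port 11434, and that a model such as llama3 has already been pulled; the model name and prompt are just placeholders.

          import json
          import urllib.request

          # One-shot completion request against the local Ollama server
          # (default port 11434); "llama3" is assumed to be pulled already.
          payload = json.dumps({
              "model": "llama3",
              "prompt": "Explain ship breaking in one sentence.",
              "stream": False,  # ask for a single JSON object, not a stream
          }).encode("utf-8")

          req = urllib.request.Request(
              "http://localhost:11434/api/generate",
              data=payload,
              headers={"Content-Type": "application/json"},
          )

          with urllib.request.urlopen(req) as resp:
              print(json.loads(resp.read())["response"])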

      Use GPT-4 if you want to get an inkling of the potential that’s coming. I mean really use it.