Like if I type “I have two appl…”, for example, it will often suggest the singular “apple” instead of the plural. Just a small example, but it is really bad at predicting which variant of a word should follow the previous one.

    • Kichae@lemmy.ca · 11 months ago

      The algorithms are the same. The models are different, being trained on a smaller data set.

      • FooBarrington@lemmy.world · 11 months ago

        No, the algorithms are not the same. Phones don’t use transformer models for text prediction, they use Markov chain-based approaches. Also, retraining of transformer models for individualized completion would be too expensive, whereas it’s basically free with Markov approaches. Where do you get these ideas?
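
        As a toy sketch of that difference, here is a bigram Markov predictor in Python (illustrative code, not any phone keyboard’s actual implementation); note how “retraining” on a user’s own typing is just more counting, which is why per-user adaptation is nearly free:

            from collections import defaultdict, Counter

            class MarkovPredictor:
                def __init__(self):
                    # counts[previous_word][next_word] -> how often that pair was seen
                    self.counts = defaultdict(Counter)

                def train(self, text):
                    words = text.lower().split()
                    for prev, nxt in zip(words, words[1:]):
                        self.counts[prev][nxt] += 1

                def suggest(self, prev_word, k=3):
                    # the most frequent words ever seen after prev_word
                    return [w for w, _ in self.counts[prev_word.lower()].most_common(k)]

            p = MarkovPredictor()
            p.train("i have two apples and two oranges")
            print(p.suggest("two"))  # ['apples', 'oranges']
            p.train("two bananas")   # per-user "retraining" is just one more counting pass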

  • Lmaydev@programming.dev · 11 months ago

    AI is a vast field. LLMs and neural networks are a small part of it.

    LLMs are very expensive to run and a lot more complex than the Markov chains often used for predictive text.

    Predictive text just chooses a likely word based on what’s typed. This may be as simple as looking for words that start with what you’ve typed.
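
    For instance, a sketch of that simplest prefix-matching case (the word list and counts here are invented for the example):

        word_freq = {"apple": 900, "apply": 700, "apples": 400, "applesauce": 50}

        def complete(prefix, k=3):
            # rank every word starting with the typed prefix by raw frequency
            matches = [w for w in word_freq if w.startswith(prefix)]
            return sorted(matches, key=word_freq.get, reverse=True)[:k]

        print(complete("appl"))  # ['apple', 'apply', 'apples']

    Ranking by raw frequency alone is exactly why “apple” can beat “apples” even right after the word “two”.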

    LLMs vectorise words and model the complex relationships between those vectors across many dimensions. So an LLM would spot the word “two” and realise that a plural should follow it.
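
    A toy illustration of that idea, scoring candidates by how well their vectors align with the context (these 3-dimensional vectors are invented for the example; real models learn thousands of dimensions):

        # made-up feature dimensions: [plural-ish, number-context, fruit-ish]
        vectors = {
            "two":    [0.9, 0.9, 0.0],   # context: a number was just typed
            "apple":  [0.1, 0.2, 0.9],
            "apples": [0.9, 0.9, 0.9],
        }

        def dot(a, b):
            return sum(x * y for x, y in zip(a, b))

        def best_next(context_word, candidates):
            # pick the candidate whose vector best aligns with the context word's
            ctx = vectors[context_word]
            return max(candidates, key=lambda w: dot(ctx, vectors[w]))

        print(best_next("two", ["apple", "apples"]))  # 'apples'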

    • Dr Cog@mander.xyz · 11 months ago

      Predictive text can also vectorize words, but the vectors are much, much simpler (far fewer dimensions per word).

  • asterfield@lemmy.world · 11 months ago

    LLMs like ChatGPT take a wild amount of resources to run.

    If you want something as smart as GPT-3 and you want it to run at typing speed, you’ll need a gaming PC running it.

    People only recently managed to run GPT-3-strength models at all on ordinary laptop hardware (slowly).

    There is currently no way to run something of GPT-4 strength on ordinary consumer hardware (I’m just guessing, but I think it would take a few hundred GB of VRAM to run).
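
    That guess is easy to sanity-check with back-of-envelope arithmetic (GPT-3’s 175B parameter count is published; GPT-4’s size is not, so any figure for it is an assumption):

        def weight_memory_gb(n_params, bytes_per_param=2.0):  # 2 bytes/param = fp16
            # memory needed just to hold the weights, ignoring activations and overhead
            return n_params * bytes_per_param / 1e9

        print(weight_memory_gb(175e9))     # GPT-3 scale at fp16: ~350 GB
        print(weight_memory_gb(7e9, 0.5))  # 7B model, 4-bit quantized: ~3.5 GB, laptop-sized

    Which also lines up with why GPT-3-strength models only run on laptops after aggressive quantization.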

  • sir_reginald@lemmy.world · 11 months ago

    LLMs are orders of magnitude more sophisticated and expensive to run. But don’t worry, I’m sure in the not-so-distant future we will see smaller LLMs running on-device as autocorrect.

    • pacoboyd@lemm.ee · 11 months ago

      It would have to be pretty specific and small to work on a phone, and I think a side effect would be that everyone’s conversations start to sound a lot more homogeneous.

      • sir_reginald@lemmy.world · 11 months ago

        You’re not wrong. Google just announced Gemini Nano, which will run directly on the Pixel 8. Of course, it’s the first of its kind, so it will probably be slow, and it’s not used as autocorrect yet. But give it a year or two and it will probably be much more common.

    • Mr_Blott@lemmy.world (OP) · 11 months ago

      Can we have Scottish ones that know what a bawbag is, and when to put an “e” on the end of “shit”?

      Thanks!

      • OpenStars@kbin.social · 11 months ago

        Think of it from the LLM’s perspective - in the general pool you have common English, you have less common variations such as this, and then you have whatever the heck people like Kid Rock are doing…

        Bawitdaba, da bang, da dang diggy diggy
        Diggy, said the boogie, said up jump the boogie

  • CyanFen@lemmy.one · 11 months ago

    Because they’re using different tech. That’s like asking why phone calls sound bad compared to VoIP calls. They’re just using different tech.

    • Candelestine@lemmy.world · 11 months ago

      Lawnmowers can’t keep up with Ferraris either, despite both being vehicles.

      edit for wording