• realharo@lemm.ee · 8 months ago

    It’s not dead, and it’s not going anywhere as long as LLMs exist.

    Prompt engineering is about expressing your intent in a way that leads an LLM to the desired result (which right now sometimes requires weird phrases, etc.).
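
    For example, the same intent can be phrased plainly or with the extra scaffolding that prompt engineering is about. The wording below is purely illustrative and model-specific, not a recipe:

    ```python
    # Same intent, two phrasings. The second adds the kind of scaffolding
    # ("weird phrases") that tends to steer current models; the exact wording
    # that helps varies by model and is only an example here.
    plain = "Summarize this support ticket."

    engineered = (
        "You are a support triage assistant.\n"
        "Summarize the ticket below in exactly 3 bullet points, "
        "then state its priority as LOW, MEDIUM, or HIGH.\n"
        "Ticket:\n"
        "{ticket_text}"
    )
    ```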

    It will go away as soon as LLMs get good at inferring intent. It might not be a single model and it may require some extra steps, but there is nothing uniquely “human” about writing prompts.

    Future systems could, for example, start asking questions more often to clarify your intent, and then use the answers as input to the next stage of tweaking the prompt.
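
    A rough sketch of that kind of loop, assuming you have some `ask_llm` call available (the function name and prompt wording here are placeholders, not any particular product’s API):

    ```python
    from typing import Callable

    def answer_with_clarification(user_request: str, ask_llm: Callable[[str], str]) -> str:
        """Two-stage flow: ask one clarifying question, then answer with it folded in."""
        # Stage 1: have the model ask a question instead of guessing.
        question = ask_llm(
            f"The user asked: {user_request}\n"
            "Reply with one short question that would most reduce the ambiguity."
        )
        clarification = input(question + " ")  # the user's answer

        # Stage 2: fold the answer back into a refined prompt.
        refined = (
            f"Request: {user_request}\n"
            f"Clarification: {clarification}\n"
            "Answer the request, taking the clarification into account."
        )
        return ask_llm(refined)
    ```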

    • abhibeckert@lemmy.world · 8 months ago

      > Future systems could, for example, start asking questions more often

      Current systems already do that. But they’re expensive, and it might be cheaper to have a human do it. Prompt engineering is very much a thing if you’re working with high-performance, low-memory language models.

      We’re a long way from having smartphones with a couple of terabytes of RAM and a few thousand GPU cores… but our phones can run basic models, and they do. Some phones use a basic LLM for keyboard autocorrect, for example.