• KSP Atlas@sopuli.xyz · 2 months ago

    After getting my head around the basics of how LLMs work, I thought “people rely on this for information?” The model seems OK for tasks like summarisation, though.

    • brbposting@sh.itjust.works · 2 months ago

      I don’t love it for summarization. If I read a summary, my takeaway may be inaccurate.

      Brainstorming is incredible. And revision suggestions. And drafting tedious responses, reformatting, parsing.

      In all cases, nothing gets attributed to me unless I read every word and am in a position to verify the output. And I internalize nothing directly, besides philosophy or something. It sure can be an amazing starting point, especially compared to a blank page.

    • brucethemoose@lemmy.world · 2 months ago

      the model seems ok for tasks like summarisation though

      That, and retrieval, and the business use cases so far, but even then only where it’s acceptable for the results to be wrong somewhat frequently.