tacking on a bunch of LLMs sure is a way to “make the web more human”.

    • averyminya@beehaw.org · 11 days ago

      It’s funny, I’ve been thinking a lot about how people acknowledge faults or shortcomings and still choose to ignore them, whether it’s because they agree, don’t care, or think it doesn’t matter. Or they disagree but there’s no better alternative, or it’s the least bad alternative. I dunno.

      In public internet spaces like Facebook, Discord, and the others, I’ve been seeing a lot of this happening recently with Linkin Park’s new singer. Some are happy and unaware, some know and don’t care, some know and are saddened. There is a lot of vitriol between the people who know and are saddened and the people who don’t know or don’t care. This is just one example from this week, but it happens every week with every story; it can probably be applied to literally anything. People’s level of information interacts heavily with their predisposed beliefs: if they already have an opinion, chances are that opinion will not change when they’re presented with new information.

      In our spaces I see it with Brave. I see it with Kagi. We all saw it with Unity en masse, and something actually came of that, but even so people are still using Unity today, albeit, I’d guess, out of necessity, or by now out of ignorance since time has passed (not that ignorance here is a fault). Before that we saw it with Audacity. Can’t forget Reddit, where a significant chunk of users are now participating here instead. And… yet… Reddit still exists, nearly in full.

      It’s such a crazy phenomenon, how opinions are formed from emotional judgements based on the level of information people have, and due to our current state of information sharing there are microcosms of willful ignorance. And some aren’t ignorant; it just doesn’t matter to them.

  • lenninscjay@lemm.ee · 11 days ago

    I’ve used some of these features when I’m trying to skim many articles for my grad school work. It’s not terrible.

    There is a use case for this stuff. Especially in a search engine.

    Short of hosting your own LLM, Kagi is one of the few I’d hope can get it right and respect privacy. (So far unverified on the AI side tho)

      • FaceDeer@fedia.io · 11 days ago

        It’s often not a choice between an AI-generated summary and a human-generated one, though. It’s a choice between an AI-generated summary and no summary.

        • noodlejetski@lemm.ee (OP) · edited · 11 days ago

          so, no summary at all, or one that does a shit job of pointing out the important bits, or gets them wrong, and therefore isn’t a proper summary? choices, choices.

      • Cenotaph@mander.xyz · 11 days ago

        Kagi actually has an interesting implementation for their search summary, and while it’s not perfect, it is miles better than the alternatives in my experience. It uses Anthropic’s Claude for language processing and incorporates Wolfram Alpha for things that need numerical accuracy. Compared to Google AI or Copilot, I’ve been seeing good results.

        While it isn’t perfect at summarizing, I’ve found their implementation to be “good enough”, and it can summarize pieces near-instantly, which I think is where it actually becomes useful. Humans may be better, but I don’t have the money or time to pay a human to summarize pages for me just to see whether they’re worth delving into further.

  • Mesa@programming.dev · 8 days ago

    Hot take: the web should not be more human.

    And I’m pretty progressive on technological matters. There should still be a clear separation, though.