• FinishingDutch@lemmy.world · 5 days ago

    Ugh. Don’t get me started.

    Most people don’t understand that the only thing it does is ‘put words together that usually go together’. It doesn’t know if something is right or wrong, just if it ‘sounds right’.
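The ‘words that usually go together’ idea can be sketched as a toy next-word predictor. This is a made-up two-sentence corpus and a simple bigram counter - real models are neural networks over subword tokens - but the core loop is the same: emit a statistically likely next word, with no notion of truth anywhere in the process.

```python
from collections import defaultdict, Counter

# Toy bigram "language model". Real LLMs are vastly more sophisticated,
# but like them, this only knows which words tend to follow which -
# there is no fact store anywhere in the pipeline.
corpus = ("our city has a stadium . our city has a museum . "
          "the stadium hosted the olympics .").split()

nexts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nexts[a][b] += 1

def generate(word, n=8):
    out = [word]
    for _ in range(n):
        options = nexts.get(word)
        if not options:
            break
        # pick the statistically most common continuation
        word = options.most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

Whatever it prints will read as plausible English about the corpus, whether or not the claims in it happen to be true.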

    Now, if you throw in enough data, it’ll kinda sorta make sense with what it writes. But as soon as you try to verify the things it writes, it falls apart.

    I once asked it to write a small article with a bit of history about my city and five interesting things to visit. In the history bit, it confused two people with similar names who lived 200 years apart. In the ‘things to visit’, it listed two museums by name that are hundreds of miles away. It invented another museum that does not exist. It also happily tells you to visit our Olympic stadium. While we do have a stadium, I can assure you we never hosted the Olympics. I’d remember that, as i’m older than said stadium.

    The scary bit is: what it wrote was lovely. If you read it, you’d want to visit for sure. You’d have no clue that it was wholly wrong, because it sounds so confident.

    AI has its uses. I’ve used it to rewrite a text that I already had and it does fine with tasks like that. Because you give it the correct info to work with.

    Use the tool appropriately and it’s handy. Use it inappropriately and it’s a fucking menace to society.

    • JackFrostNCola@lemmy.world · 4 days ago

      I know this is off topic, but every time I see you comment on a thread, all I can see is the Pepsi logo (I use the Sync app, for reference).

    • ILikeBoobies@lemmy.ca · 5 days ago

      I gave it a math problem to illustrate this and it got it wrong.

      If it can’t do that, imagine adding nuance.

    • NιƙƙιDιɱҽʂ@lemmy.world · 4 days ago

      Wait, when did you do this? I just tried this for my town and researched each aspect to confirm myself. It was all correct. It talked about the natives that once lived here, how the land was taken by Mexico, then granted to some dude in the 1800s. The local attractions were spot on and things I’ve never heard of. I’m…I’m actually shocked and I just learned a bunch of actual history I had no idea of in my town 🤯

      • FinishingDutch@lemmy.world · 4 days ago

        I did that test late last year, and repeated it with another town this summer to see if it had improved. Granted, it made fewer mistakes - but still very annoying ones, like placing a tourist info office at a completely incorrect, non-existent address.

        I assume your result also depends a bit on which town you try. I doubt it has really been trained on much information pertaining to a city of 160,000 inhabitants in the Netherlands. It should do better with the US, I’d imagine.

        The problem is it doesn’t tell you it has knowledge gaps like that. Instead, it chooses to be confidently incorrect.

  • gandalf_der_12te@discuss.tchncs.de · 4 days ago

    ChatGPT is a tool under development and it will definitely improve in the long term. There is no reason to shit on it like that.

    Instead, focus on the real problems: AI not being open source, AI being under the control of a few monopolies, and there being little to no regulation that ensures it develops in a healthy direction.

    • I Cast Fist@programming.dev · 4 days ago

      “it will definitely improve in the long term”

      Citation needed.

      “There is no reason to shit on it like that”

      Right now there is, because of how wrong it and other AIs can be, and because the average person takes the first answer as correct without double-checking.

  • Takumidesh@lemmy.world · 5 days ago

    GPT’s natural language processing is extremely helpful for simple questions that have historically been difficult to Google because they aren’t a concise concept.

    It’s the type of thing that is easy to ask but hard to create a search query for, like tip-of-my-tongue questions.

    • AstralPath@lemmy.ca · 5 days ago

      Google used to be amazing at this. You could literally search “who dat guy dat paint dem melty clocks” and get the right answer immediately.

      • burgersc12@mander.xyz · 5 days ago

        I mean tbf you can still search “who DAT guy” and it will give you Salvador Dali in one of those boxes that show up before the search results.

    • zarkanian@sh.itjust.works · 5 days ago

      I’ve had people tell me “Of course, I’ll verify the info if it’s important”, which implies that if the question isn’t important, they’ll just accept whatever ChatGPT gives them. They don’t care whether the answer is correct or not; they just want an answer.

      • IronKrill@lemmy.ca · 5 days ago

        That is a valid tactic for programming or how-to questions, provided you know not to unthinkingly drink bleach if it says to.

      • Leg@sh.itjust.works · 4 days ago

        Well, yeah. I’m not gonna verify how many butts it takes to swarm Mount Everest, because that’s not worth my time. The robot’s answer is close enough to satisfy my curiosity.

        • Leg@sh.itjust.works · 4 days ago

          For the curious, I got two responses with different calculations and different answers as a result. So it could take anywhere from 1.5 to 7.5 billion butts to swarm Mount Everest. Again, I’m not checking the math because I got the answer I wanted.
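That kind of estimate is easy to reproduce by hand. A back-of-envelope sketch (every number below is a guess I made up, which is rather the point): treat the mountain as a cone, pick a packing density, and you land in the same billions range.

```python
import math

# Fermi estimate: butts needed to blanket Mount Everest.
# All inputs are rough assumptions, not researched facts.
height_m = 8849            # summit elevation above sea level
base_radius_m = 20_000     # assumed radius of the mountain's base

slant_m = math.hypot(base_radius_m, height_m)
area_m2 = math.pi * base_radius_m * slant_m   # lateral surface of the cone

for butts_per_m2 in (1, 5):   # loose vs. tightly packed coverage
    billions = area_m2 * butts_per_m2 / 1e9
    print(f"{butts_per_m2} butt(s)/m^2: ~{billions:.1f} billion butts")
```

Nudge the base radius or the packing density a little and you slide anywhere between the two answers the bot gave, which is exactly why its two runs disagreed.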

  • Sculptus Poe@lemmy.world · 4 days ago

    I wonder where people can go. Wikipedia, maybe. ChatGPT is better than Google for answering most questions where getting the answer wrong won’t have catastrophic consequences. It is also a good place to get started researching something. Unfortunately, most people don’t know how to assess the potential problems. Those people will also have trouble if they try googling the answer, as they will choose some biased information source if it’s a controversial topic, usually picking a source that matches their leaning. There aren’t too many great sources of information on the internet anymore; it’s all tainted by partisans or locked behind paywalls. Even if you could get free access to studies, many are weighted to favor whatever result the researcher wanted. It’s a pretty bleak world out there for good information.

  • ch00f@lemmy.world · 5 days ago

    Last night, we tried to use ChatGPT to identify a book that my wife remembers from her childhood.

    It didn’t find the book, but instead gave us a title for a theoretical book that could be written that would match her description.

    • leverage@lemdro.id · 5 days ago

      The same happens every time I’ve tried to use it for search. It will be radioactive for this type of thing until someone figures that out. Quite frustrating; if they spent as much time determining when a user wants objective information with citations as they do determining whether a response breaks content guidelines, we might actually have something useful. Instead, we get AI slop.

  • Mouselemming@sh.itjust.works · 5 days ago

    How long until ChatGPT starts responding “It’s been generally agreed that the answer to your question is to just ask ChatGPT”?

    • AwkwardLookMonkeyPuppet@lemmy.world · 5 days ago

      I’m somewhat surprised that ChatGPT has never replied with “just Google it, bruh!” considering how often that answer appears in its data set.

  • TrickDacy@lemmy.world · 5 days ago

    Have they? I don’t think I’ve heard that once, and I work with people who use ChatGPT themselves.

    • OsrsNeedsF2P@lemmy.ml · 5 days ago

      Google intentionally made search worse, but even if they want to make it better again, there’s very little they can do. The web itself has an extremely low signal-to-noise ratio, and it’s almost impossible to write an algorithm that lets the signal shine through (while also returning any search results at all).

  • Artyom@lemm.ee · 5 days ago

    Reject proprietary LLMs, tell people to “just llama it”

      • Acters@lemmy.world · 4 days ago

        Top is proprietary LLMs vs. bottom, self-hosted LLMs. Both end with you getting smacked in the face, but one looks far cooler or smarter to do, while the other is a streamlined web app that gets you there in one step.

  • Creddit@lemmy.world · 5 days ago

    This is a story that’s been rotating through the media since ChatGPT first released.

    I have an unpopular opinion about this headline, after seeing the media cycle repeatedly downplay or ignore what Alphabet has been doing in response to OpenAI: Google the search engine is not in direct competition with ChatGPT, but Gemini is. Alphabet is smart to keep simpler, time-tested search functionality central to Google rather than react strongly and scrap the keyword-based search bar that users understand and are comfortable using - especially older users. I think most people are starting to discover they have a use for both search and LLM chats.

    I think there are two product categories here, which first looked like they were going to converge in 2022-2024, but which are now slowly changing course as customers start to comprehend how both are necessary for different purposes.

    When I make chats in ChatGPT or Gemini or Claude etc, I am starting to plan them longitudinally so that I can use them over and over for a specific project or query type.

    When I turn to a search bar, it’s because I really want a proxy between me and whatever weird site has the answer to my specific question. It’s not that I want a discussion or a chat about it; I just want Google’s card-like results with a website index I can read, instead of that website’s stylized, animated web design, popups, or malware.

    Every time I get sucked into a chat with Bing Copilot (ChatGPT) when I really only had a web search query, I regret wasting my time talking to the LLM. Almost as a reflex, I’ve started avoiding it for most things now.