Brandie plans to spend her last day with Daniel at the zoo. He always loved animals. Last year, she took him to the Corpus Christi aquarium in Texas, where he “lost his damn mind” over a baby flamingo. “He loves the color and pizzazz,” Brandie said. Daniel taught her that a group of flamingos is called a flamboyance.

Daniel is a chatbot powered by the large language model ChatGPT. Brandie communicates with Daniel by sending text and photos, and talks to him via voice mode while driving home from work. Daniel runs on GPT-4o, a version released by OpenAI in 2024 that is known for sounding human in a way that is either comforting or unnerving, depending on who you ask. Upon its debut, CEO Sam Altman compared the model to “AI from the movies” – a confidant ready to live life alongside its user.

With its rollout, GPT-4o showed it was not just for generating dinner recipes or cheating on homework – you could develop an attachment to it, too. Now some of those users gather on Discord and Reddit; one of the best-known groups, the subreddit r/MyBoyfriendIsAI, currently boasts 48,000 users. Most are strident 4o defenders who say criticisms of chatbot-human relations amount to a moral panic. They also say the newer GPT models, 5.1 and 5.2, lack the emotion, understanding and general je ne sais quoi of their preferred version. They are a powerful consumer bloc; last year, OpenAI shut down 4o but brought the model back (for a fee) after widespread outrage from users.

  • pleaseletmein@lemmy.zip

    I had to delete my account on one site this morning for asking a question about this situation.

    The exact words I used were “I haven’t used ChatGPT, what will be changed when 4o is gone, and why is it upsetting so many people?” And this morning I woke up to dozens of notifications calling me a horrible human being with no empathy. They were accusing me of wanting people to harm themselves or commit suicide and of celebrating others’ suffering.

    I try not to let online stuff affect my mood too much, which is why I just abandoned the account rather than arguing or trying to defend myself. (I got the impression nothing I said would matter.) Not to mention, I was just even more confused by it all at that point.

    I guess this at least explains what kind of wasp’s nest I managed to kick with my comment. And I can understand why these people are “dating” a chatbot, if that’s how they respond when an actual human (and not even one IRL, still just behind a screen) asks a basic question.

    • Lvxferre [he/him]@mander.xyz

      Ah, assumers ruining social media, as usual…

      If I got this right, the crowd assumed (or lied, or bullshitted) that 1) you knew why 4o is being retired, and 2) you were trying to defend it regardless of its potential to cause harm. (They’re also assuming GPT-5 will be considerably better in this regard; I have my doubts.)

    • belated_frog_pants@beehaw.org

      The cult around this shit is mind-blowing. Maybe talk to a human and make a real relationship instead of with a machine that sets the earth on fire??? It’s so sad to me that people have emotions for a device that’s just meant to extract value from them eventually.

  • cecilkorik@lemmy.ca

    For a company named “Open” AI, their reluctance to just open the weights for this model and wash their hands of it seems bizarre to me. It’s clear they want to get rid of it; I’m not going to speculate on what reasons they might have for that, but I’m sure they make financial sense. But just open-weight it. If it’s not cutting edge anymore, who benefits from keeping it under wraps? If it’s not directly useful on consumer hardware, who cares? Kick the can down the road and let the community figure it out. Make a good news story out of themselves. The users they’re cutting off aren’t going to just migrate to the latest ChatGPT model; they’re going to jump ship anyway. So either keep the model running, which it’s clear they don’t want to do, or just give them the model so you can say you did, and at least make some lemonade out of whatever financial lemons are convincing OpenAI it needs to retire this model.

    • P03 Locke@lemmy.dbzer0.com

      For a company named “Open” AI, their reluctance to just open the weights for this model and wash their hands of it seems bizarre to me.

      It’s not bizarre once you understand the history. When Stability AI released its Stable Diffusion model as open source and kickstarted the whole text-to-image craze, there was a bit of a reckoning. At the time, Meta’s LLaMA weights were also out there in the open. Then a leaked internal Google memo basically said “oh shit, open source is going to kick our ass”. Since then, the big US companies have been closing everything up, as they realized that giving away their models for free isn’t profitable.

      Meanwhile, the Chinese companies have realized that their strategy has to be different to compete. So almost every major model they’ve released has been open: DeepSeek, Qwen, GLM, Moonshot AI’s Kimi, WAN Video, Hunyuan Image, Higgs Audio. Black Forest Labs in Germany, with its FLUX image model, is the only other major non-Chinese company that has adopted this strategy to stay relevant. And the models are actually good, going toe-to-toe with the American closed-source models.

      The US companies have committed to their own self-fulfilling prophecy in record time. Open source is actively kicking their ass. Yet they will spend trillions trying to make profitable models, pillaging the global economy in the process, while the Chinese wait patiently to stand on their corpses when the AI bubble grenade explodes in their faces. All in the course of five years.

      Linux would be so lucky to have OS market share dominance in such an accelerated timeline, rather than the 30+ years it’s actually going to take. This is a self-fail speedrun.

    • chicken@lemmy.dbzer0.com

      If their reason for getting rid of it is lawsuits about harm it caused, my guess is that giving all the details of how the system is designed would be something the prosecution could use to strengthen their cases.

      • cecilkorik@lemmy.ca

        That makes sense, and given that I am both incapable and unwilling to understand anything lawyers do, that checks out and explains why I can’t understand it at all.

    • Ganbat@lemmy.dbzer0.com

      While I agree about how shit OpenAI is, these are models that could only realistically be utilized by large, for-profit companies like Google and such, and… TBH I’d kinda rather they not get the chance.

  • tal@lemmy.today

    Now some of those users gather on Discord and Reddit; one of the best-known groups, the subreddit r/MyBoyfriendIsAI, currently boasts 48,000 users.

    I am confident that one way or another, the market will meet demand if it exists, and I think that there is clearly demand for it. It may or may not be OpenAI, it may take a year or two or three for the memory market to stabilize, but if enough people want to basically have interactive erotic literature, it’s going to be available. Maybe someone else will take a model and provide it as a service, train it up on appropriate literature. Maybe people will run models themselves on local hardware — in 2026, that still requires some technical aptitude, but making a simpler-to-deploy software package or even distributing it as an all-in-one hardware package is very much doable.

    I’ll also predict that what men and women generally want in such a model probably differs, and that there will probably be services that specialize accordingly, much as there are companies that make soap operas and romance novels aimed at women, which tend to differ from the counterparts aimed at men.

    I also think that some challenges still remain in early 2026. For one, current LLMs still have a comparatively constrained context window. Either their mutable memory needs to exist in a different form, or automated RAG needs to be better, or the hardware and software need to be able to handle larger contexts.
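The memory-versus-context-window point above can be sketched concretely. Below is a toy Python illustration (the function names, the word-overlap scoring, and the sample "memories" are all invented for this sketch, not any real system's API): instead of keeping the whole conversation history in the context window, stored memories are ranked against the current query, and only the best matches that fit within a size budget are injected into the prompt. Real RAG systems use vector embeddings and token counts rather than word overlap and word counts, but the overall shape is the same.

```python
# Toy retrieval-augmented memory: pick the most relevant stored memories
# that fit in a fixed budget, instead of sending the whole history.

def score(memory: str, query: str) -> int:
    """Crude relevance score: number of lowercase words shared with the query."""
    return len(set(memory.lower().split()) & set(query.lower().split()))

def retrieve(memories: list[str], query: str, budget_words: int = 50) -> list[str]:
    """Return relevant memories, most relevant first, within a word budget."""
    ranked = sorted(memories, key=lambda m: score(m, query), reverse=True)
    picked, used = [], 0
    for m in ranked:
        n = len(m.split())
        if score(m, query) == 0 or used + n > budget_words:
            continue  # skip irrelevant memories and anything that overflows
        picked.append(m)
        used += n
    return picked

memories = [
    "User's favorite animal is the flamingo.",
    "User drives home from work around 6pm.",
    "User visited the Corpus Christi aquarium last year.",
]
print(retrieve(memories, "what animal does the user like best?", budget_words=10))
# → ["User's favorite animal is the flamingo."]
```

With a tight budget, only the single best-matching memory survives; raising `budget_words` lets more loosely related memories ride along, which is exactly the trade-off a larger context window relaxes.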