• nickwitha_k (he/him)@lemmy.sdf.org
    1 day ago

    I’d argue that showing disdain, aggression, and disrespect in communication with AI/LLM things is more likely to be dangerous, as one is conditioning oneself to be disdainful, aggressive, and disrespectful through the same methods used to communicate with other people. Our brains do a great job at association, so it’s basically just training oneself to be an asshole.

    • thisbenzingring@lemmy.sdf.org
      1 day ago

      why are you arguing that at me? I just argued that it’s not a human; AI is a tool and should be treated as such. If my tool sucks, I will tell it so and quit using it. If my tool is great, I will use it to the best of my ability and respect its functionality.

      everyone else here is making strawman arguments because I just don’t think it needs to be anthropomorphized. The link talks about “tens of millions of dollars” wasted on computing “please” and “thank you”.

      that is fucking stupid behavior

      • nickwitha_k (he/him)@lemmy.sdf.org
        4 hours ago

        why are you arguing that at me?

        Rationally and in a vacuum, anthropomorphizing tools and animals is kinda silly and sometimes dangerous. But human brains don’t do well at context separation and rationality. They are very noisy and prone to conceptual cross-talk.

        The reason that this is important is that, as useless as LLMs are at nearly everything they are billed as, they are really good at fooling our brains into thinking that they possess consciousness (there are plenty of people, even on Lemmy, who ascribe levels of intelligence to them that are impossible with the technology). Just like knowledge and awareness don’t grant immunity to propaganda, our unconscious processes will do their own thing. Humans are social animals and our brains are adapted to act as such, resulting in behaviors that run the gamut from wonderfully bizarre (keeping pets that don’t “work”) to dangerous (attempting to pet bears or keep chimps as “family”).

        Things that are perceived by our brains, consciously or unconsciously, are stored with associations to other similar things. So the danger I was trying to highlight is that being abusive to a tool like an LLM, which can trick our brains into associating it with conscious beings, can indirectly reinforce the acceptability of abusive behavior towards other people.

        Basically, like I said before, one can unintentionally train oneself into practicing antisocial behaviors.

        You do have a good point, though, that people believing ChatGPT is a being they can confide in, etc., is very harmful and, itself, likely to lead to antisocial behaviors.

        that is fucking stupid behavior

        It is human behavior. Humans are irrational as fuck, even the most rational of us. It’s best to plan accordingly.

      • CileTheSane@lemmy.ca
        1 day ago

        If my tool sucks, I will tell it so

        So thanking your tools: dangerous on a humanity-level scale

        Telling your tool it sucks: normal behaviour

      • sugar_in_your_tea@sh.itjust.works
        1 day ago

        Exactly!

        I’m a parent, and I set a good example by being incredibly respectful to people, whether it’s the cashier at the grocery store, their teacher at school, or a police officer. I show the same respect because I’m talking to a person.

        When I’m talking to a machine, I’m direct without any respect because the goal is to clearly indicate intent. “Alexa, play <song>” or “Hey Google, what’s <query>?” They’re tools, and there is zero value in being polite to a machine; it just adds more chances for the machine to misinterpret me.

        Kids are capable of understanding that you act differently in different situations. They’re super respectful to their teachers, they don’t bother with that w/ their peers, and we as parents are somewhere in between. I don’t want my kids to associate AI/LLMs more with their teachers than with their pencils. They’re tools, and their purpose is to be used efficiently.