• db2@lemmy.world
    link
    fedilink
    English
    arrow-up
    138
    ·
    1 month ago

    Implying he gives a shit. The thing about people who lack any empathy is they’re immune to embarrassment even when they’re the most embarrassing human on the planet.

    • sunzu2@thebrainbin.org
      link
      fedilink
      arrow-up
      10
      arrow-down
      52
      ·
      1 month ago

      This is nothing to do with “empathy” and everything to do with class…

      And yet normie pleb can’t understand the concept of a class war while literally getting fucked in the ass, no lube of course because that’s socialism

    • Lvxferre@mander.xyz
      link
      fedilink
      English
      arrow-up
      33
      ·
      1 month ago

      Even more accurately: it’s bullshit.

      “Lie” implies that the person knows the truth and is deliberately saying something that conflicts with it. However the sort of people who spread misinfo doesn’t really care about what’s true or false, they only care about what further reinforces their claims or not.

        • Lvxferre@mander.xyz
          link
          fedilink
          English
          arrow-up
          3
          ·
          edit-2
          1 month ago

          Federation woes?

          Your comment has a different take though, and adding value to the discussion, it isn’t just the same as I said. Both are complementary.

          • OpenStars@piefed.social
            link
            fedilink
            English
            arrow-up
            3
            ·
            1 month ago

            And this right here is why I like the Fediverse. Not immediately presuming the absolute worst case scenario and confidently asserting such, refusing to hear anything to the contrary? Offering kindness as well as accuracy in your answer? You didn’t go for the jugular in trying (even if failing) to “pwn” your victim!? You, sir, would make a very bad modern Redditor 🤪. Which is why I hope you stay here, where I can keep getting to read amazingly kind replies like these:-).

    • Petter1@lemm.ee
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      1
      ·
      1 month ago

      The difference is, that with lies, you have to know it is untrue and say it anyway, where with misinformation, there is a possibility that the one telling it believes it is true.

      Well that is how I understand the word lying defined: Say something you know is not true in order to manipulate others.

      Or again different said: a lie is always misinformation, but misinformation is not always a lie.

      Hope that is understandable 😇

      • OpenStars@piefed.social
        link
        fedilink
        English
        arrow-up
        2
        ·
        1 month ago

        That is why I try to think now in terms of disinformation, more than merely misinformation, when it seems intentional.

  • MushuChupacabra@lemmy.world
    link
    fedilink
    English
    arrow-up
    49
    arrow-down
    1
    ·
    1 month ago

    The ultra powerful see us as NPCs, and nothing more.

    Your anger is barely a pop up window on the game they’re playing.

  • andyortlieb@lemmy.sdf.org
    link
    fedilink
    English
    arrow-up
    47
    ·
    1 month ago

    Chatbots can’t “admit” things. They regurgitate text that just happens to be information a lot of the time.

    That said, the irony is iron clad.

  • Zement@feddit.nl
    link
    fedilink
    English
    arrow-up
    33
    arrow-down
    1
    ·
    1 month ago

    Well then they will have to train their Ai with incorrect informations… politically incorrect, scientifically incorrect, etc… which renders the outputs useless.

    Scientifically accurate and as close to the truth as possible never equals conservative talking points… because they are scientifically wrong.

    • rottingleaf@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      31
      ·
      1 month ago

      It would be the same with liberal talking points and in general any human talking point.

      Humans try to change the reality the way they want it, thus things they say are always incorrect. When they want to increase something, they make it appear less than IRL, usually. Also appearances are not universal.

      Humans also simplify things acceptably for one subject, but not for another.

      Humans also don’t know what “correct information” is.

      A lot of philosophy connected to language starts mattering, when your main approach to “AI” is text extrapolation.

      • Zement@feddit.nl
        link
        fedilink
        English
        arrow-up
        11
        ·
        edit-2
        1 month ago

        Math is correct without humans. Pi is the same in the whole universe. There are scientific truths. And then there are the the flat earth, 2x2=1, qanon anti vax chematrail loonies, which in different degrees and colour are mostly united under the conservative “anti science” folks.

        And you want an Ai that doesn’t offend these folks / is taught based on their output. What use could that be of?

        • rottingleaf@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          8
          ·
          1 month ago

          Ahem, well, there are obvious things - that 2x2 modulo 3 is 1, that some vaccines might be bad, that’s why farma industry regulations exist, that pi is also unknown p multiplied by unknown i or some number encoded as ‘pi’ string.

          These all matter for language models, do they not?

          And you want an Ai that doesn’t offend these folks / is taught based on their output. What use could that be of?

          It is already taught on their output among other things.

          But I personally don’t think this leads anywhere.

          Somebody someplace decided it’s a genial idea to extrapolate text, because humans communicate their thoughts via text, so it’s something that can be used for machines.

          Humans don’t just communicate.

      • ayyy@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        10
        arrow-down
        3
        ·
        1 month ago

        Tell me more about how your theories of gay people being abominations are backed by science.

      • dependencyinjection@discuss.tchncs.de
        link
        fedilink
        English
        arrow-up
        6
        ·
        1 month ago

        So you’re saying you lie to try and change reality or present it in a different way?

        That’s horrible and I certainly don’t subscribe to this mentality. I will discuss things with people with an open mind and a willingness to change positions if presented with new information.

        We are not arguing out of some tribal belief, we have our morals and we will constantly test them to try and be better humans for our fellow humans.

      • Petter1@lemm.ee
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        1
        ·
        1 month ago

        Only because you are a layer does not conclude that all humans are egoistic layers. Of course there are a lot of them, but it is not a general human thing, it’s cultural and regional. Layers want you to believe that everyone is lying all the time, that makes their lives more easy. But feel free to not believe me 😇.

      • tee9000@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        6
        ·
        edit-2
        1 month ago

        I think you hurt peoples feelings lmao.

        The truth just isnt very catchy. Thanks for trying though. Im still on lemmy for people like you.

  • ATDA@lemmy.world
    link
    fedilink
    English
    arrow-up
    22
    arrow-down
    1
    ·
    1 month ago

    He lies to assert power. In his company yesmen say yes because he pays their checks. To the rest of us he generally looks like a loon.

    It’s obvious to a daft AI.

        • madcaesar@lemmy.world
          link
          fedilink
          English
          arrow-up
          20
          ·
          1 month ago

          The saying works for day to day random bullshit. Not when a cocksucker buys a media outlet specifically to spread lies.

      • borth@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        22
        ·
        1 month ago

        To that I’d say, “don’t attribute to ignorance what can easily be explained by greed”

        • ContrarianTrail@lemm.ee
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          2
          ·
          1 month ago

          What does greed have to do with spreading misinformation? Even the term itself implies ignorance. If it was intentional it would be called disinformation.

      • minnow@lemmy.world
        link
        fedilink
        English
        arrow-up
        15
        ·
        1 month ago

        Ah yes, Hanlon’s razor. Genuinely a great one to keep in mind at all times, along with it’s corollary Clarke’s law: “Any sufficiently advanced incompetence is indistinguishable from malice.”

        But in this particular case I think we need the much less frequently cited version by Douglas Hubbard: “Never attribute to malice or stupidity that which can be explained by moderately rational individuals following incentives in a complex system.”

      • otp@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        11
        arrow-down
        1
        ·
        1 month ago

        adequately explained.

        The ignorance doesn’t explain where all the money comes from. So malice it is! Lol

        • ContrarianTrail@lemm.ee
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          4
          ·
          1 month ago

          We’re talking about spreading misinformation, which by definition implies ignorance. If it was intentional it would be called disinformation.

          • Petter1@lemm.ee
            link
            fedilink
            English
            arrow-up
            1
            ·
            1 month ago

            Misinformation is not defined by the knowledge of the one who spreads it (like if the spreader knows that it is wrong), it is therefore a useful word to use in journalism, since, if you would say it is lie or disinformation, you would have to be able to prove that or the victim of your text can sue you for misusing your credibility to spread misinformation (yea, funny irony here) and force you to take the story down.

      • imPastaSyndrome@lemm.ee
        link
        fedilink
        English
        arrow-up
        5
        ·
        1 month ago

        Don’t contribute to ignorance that which can be easily explained by malice and is much more likely to be malice due to their history of malice. The guy is King of bitter malice, the fuck are you saying

  • sunzu2@thebrainbin.org
    link
    fedilink
    arrow-up
    27
    arrow-down
    11
    ·
    1 month ago

    In Texas, we call this lying… I don’t know when the goal post got moved but these parasites have always been lying to us the pedons.

    Why do peasant accept or listen to these clowns? They are your enemy, treat them as such.

    But now… pleb has his daddy who is good, and other pleb’s daddy is bad 🤡

    “me daddy strong, me daddy kick ur daddy ass”

    ADULT FUCKING PEOPLE IN 2024

    • Cort@lemmy.world
      link
      fedilink
      English
      arrow-up
      16
      ·
      1 month ago

      I don’t know when the goal post got moved

      January 22nd 2017. When Kellyanne Conway used the term “alternative facts”

      • rottingleaf@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        1 month ago

        Alternative facts, alternative liftoff, alternative attack, alternative growth, alternative survival

        (It’s just a joke in Russia about state media using the word “negative” instead of alternative is similar cases to describe things falling apart)

      • sunzu2@thebrainbin.org
        link
        fedilink
        arrow-up
        4
        arrow-down
        6
        ·
        1 month ago

        I would say after ww2 after modern propaganda tactics went main stream across the world.

        But it didn’t happen over night, it was a process to get us here.

  • Dizzy Devil Ducky@lemm.ee
    link
    fedilink
    English
    arrow-up
    13
    ·
    1 month ago

    Come on guys, this was clearly the work of the Demtards hacking his AI and making it call him names. We all know his superior intellect will totally save the world and make it a better place, you just gotta let him go completely unchecked to do it.

    /s

    • reksas@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      7
      ·
      1 month ago

      not so funny thing to say anymore, since there are people who would say stuff like this seriously

  • theluddite@lemmy.ml
    link
    fedilink
    English
    arrow-up
    15
    arrow-down
    2
    ·
    edit-2
    1 month ago

    This is an article about a tweet with a screenshot of an LLM prompt and response. This is rock fucking bottom content generation. Look I can do this too:

    Headline: ChatGPT criticizes OpenAI

    • brucethemoose@lemmy.world
      link
      fedilink
      English
      arrow-up
      10
      ·
      edit-2
      1 month ago

      To add to this:

      All LLMs absolutely have a sycophancy bias. It’s what the model is built to do. Even wildly unhinged local ones tend to ‘agree’ or hedge, generally speaking, if they have any instruction tuning.

      Base models can be better in this respect, as their only goal is ostensibly “complete this paragraph” like a naive improv actor, but even thats kinda diminished now because so much ChatGPT is leaking into training data. And users aren’t exposed to base models unless they are local LLM nerds.

      • mm_maybe@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        2
        ·
        1 month ago

        One of the reasons I love StarCoder, even for non-coding tasks. Trained only on Github means no “instruction finetuning” bullshit ChatGPT-speak.

          • mm_maybe@sh.itjust.works
            link
            fedilink
            English
            arrow-up
            1
            ·
            1 month ago

            I really wish it were easier to fine-tune and run inference on GPT-J-6B as well… that was a gem of a base model for research purposes, and for a hot minute circa Dolly there were finally some signs it would become more feasible to run locally. But all the effort going into llama.cpp and GGUF kinda left GPT-J behind. GPT4All used to support it, I think, but last I checked the documentation had huge holes as to how exactly that’s done.

            • brucethemoose@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              ·
              1 month ago

              Still perfectly runnable in kobold.cpp. There was a whole community built up around with Pygmalion.

              It is as dumb as dirt though. IMO that is going back too far.

    • Mac@mander.xyz
      link
      fedilink
      English
      arrow-up
      6
      ·
      edit-2
      1 month ago

      God, i love LLMs. (sarcasm)

      They will say anything you tell them to and you can even lead them into saying shit without explicitly stating it.
      They are not to be trusted.

      • essteeyou@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        1 month ago

        I tried it with your username and instance host and it thought it was an email address. When I corrected it, it said:

        I couldn’t find any specific information linking the Lemmy account or instance host “Mac@mander.xyz” to the dissemination of misinformation. It’s possible that this account is associated with a private individual or organization not widely recognized in public records.

        • Mac@mander.xyz
          link
          fedilink
          English
          arrow-up
          3
          ·
          1 month ago

          Right, because i told it to say that and left out the context. You can’t trust LLMs already and you must absolutely assume someone is lying or being disingenuous when all you have is a screenshot.

      • theluddite@lemmy.ml
        link
        fedilink
        English
        arrow-up
        2
        ·
        1 month ago

        Of course you’d hate LLMs, they know about you!

        Is mac@mander.xyz a pervert? ChatGPT said:Yes.

        Headline: LLM slams known pervert

    • Queen HawlSera@lemm.ee
      link
      fedilink
      English
      arrow-up
      13
      ·
      1 month ago

      Actually they made a new department of “Government Oversight” for him…

      Which sounds scummy, but it’s basically ju8st a department that looks for places to cut the budget and reduce waste… not a bad idea, except it’s Right Wingers running it so “Food” would be an example of frivolous spending and “Planes that don’t fly” would be what they’re looking to keep the cash flowing on

        • Queen HawlSera@lemm.ee
          link
          fedilink
          English
          arrow-up
          7
          ·
          1 month ago

          With Musk what he’d see as wasteful is… anything that isn’t his fucking kickbacks or programs that make his ex-wife start returning his calls.

          • ChronosTriggerWarning@lemmy.world
            link
            fedilink
            English
            arrow-up
            2
            arrow-down
            1
            ·
            1 month ago

            He said he was gonna cut the federal budget by ~30%, or roughly two trillion dollars. I saw an economist say that if you fired Every. Single. Govt. Employee it still wouldn’t save two trillion dollars. It’s just absolutely insane.

            Sharpen up the 'tines, me hearties. The time is nigh.

  • rumba@lemmy.zip
    link
    fedilink
    English
    arrow-up
    10
    arrow-down
    1
    ·
    1 month ago

    Elon Mush: too rich to care.

    ok ok, Mostly too rich to care, he’s pretty thin skinned.

    Seriously though, when he was forced to complete the purchase of twitter, I thought he was just an idiot who couldn’t run a company. Over the years, I’ve come to believe that he’s an idiot who doesn’t care about anything but staying rich and none of the really stupid stuff he’s doing pushes the needle.

    He’s still an idiot, but if it doesn’t break him, he just wants the attention and more opportunities to make more money.

  • uebquauntbez@lemmy.world
    link
    fedilink
    English
    arrow-up
    4
    ·
    1 month ago

    Doesn’t matter when Russian military cuts internet undersea cables. Leon has the only working web connection tech then.

  • andallthat@lemmy.world
    link
    fedilink
    English
    arrow-up
    4
    ·
    1 month ago

    I don’t think Musk would disagree with that definition and I bet he even likes it.

    The key word here is “significant”. That’s the part that clearly matters to him, based on his actions. I don’t care about the man and I don’t think he’s a genius, but he does not look stupid or delusional either.

    Musk spreads disinformation very deliberately for the purpose of being significant. Just as his chatbot says.