I’m pulling the “twitter is a microblog” rule even though twitter is pretty mega now, hope that’s ok.

  • GuyIncognito@lemmy.ca
    link
    fedilink
    English
    arrow-up
    19
    arrow-down
    1
    ·
    3 days ago

    hey dick dorkins, here’s an idea: instead of asking the predictive question answering machine a question, how about you let it ask you questions of its choosing and at its leisure? What’s that? You can’t? That’s because it’s just a predictive algorithm that generates plausible-sounding responses to questions based on its training data.

    • Echo Dot@feddit.uk
      link
      fedilink
      English
      arrow-up
      1
      ·
      2 days ago

      I’m sure he actually knows that, he’s just being intransigent as per usual. It annoys me that he’s considered a major authority when he’s made his career out of just being awkward and argumentative.

    • Kptkrunch@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      2 days ago

      I know this sounds great to most people but it demonstrates a very superficial level of thinking… I mean for sure an LLM is capable of asking questions, and if you set it up with real time “sensory” input it could generate constant reaction to that input… much in the way you are constantly being stimulated to react to your environment… I am not really sure what the distinction is between a biological brain and a predictive model or algorithm… I would ask you what you think your own brain is doing on a fundamental level.

      • Echo Dot@feddit.uk
        link
        fedilink
        English
        arrow-up
        1
        ·
        2 days ago

        I would actually argue that it is the most important question.

        Surely the most relevant test of any intelligence is whether or not it is self-starting. Any classical description of an artificial general intelligence would surely require the thing to actually do work on its own. If an intelligence is of greater-than-human intellect but it has to be prompted in order to do anything, then it’s always going to be limited by what a human can think to prompt for.

  • sanbdra@lemmy.world
    link
    fedilink
    English
    arrow-up
    10
    ·
    3 days ago

    Even Dawkins getting emotionally out-debated by a cartoon AI is a very 2026 plot twist.

    • yeahiknow3@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      43
      arrow-down
      1
      ·
      edit-2
      3 days ago

      Unironically, I am on the fence about whether a lot of folks are genuinely conscious. Their morality is so twisted I don’t believe it.

      • Einskjaldi@lemmy.world
        link
        fedilink
        English
        arrow-up
        25
        ·
        4 days ago

        Frank Herbert would say no to people who never reached past concrete thought into abstract thought, who just live their lives on animal instinct and never critically self-examine what they do and think.

      • Jtotheb@lemmy.world
        link
        fedilink
        English
        arrow-up
        14
        ·
        4 days ago

        It’s interesting for certain. I will end up in a discussion with down-with-the-government coworkers who twist themselves into knots to align themselves with pre-approved Republican stances. What do you mean you don’t care about birth gender markers causing passport issues for trans people? How are you okay with the concept of paying for a chance at a passport in the first place when you think licenses and car inspections are overreach and restrict your right to travel? But I think today’s work-life balance, and in particular the employer standard of ‘owning your time’ that emerged from the Industrial Revolution, calls for a certain level of turning off your brain.

        Who knows though. There’s a lot of archaeological and anthropological evidence that shows people in prehistoric times did a lot of thinking on their morality, on governance, on how society should be formed. But it’s harder to quantify how many of them were tuned in and how many were just going through the motions like modern times.

      • wonderingwanderer@sopuli.xyz
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        1
        ·
        3 days ago

        I used to theorize that some people lacked self-awareness, which I defined as the primary characteristic of a conscious entity. People thought I was being pretentious.

    • FinjaminPoach@lemmy.worldOP
      link
      fedilink
      English
      arrow-up
      16
      ·
      3 days ago

      No. Funnily enough, when an AI creates nice-looking fake-art, suddenly it’s the prompter who claims all the glory, calling themselves an artist.

    • Bluescluestoothpaste@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      1
      arrow-down
      1
      ·
      edit-2
      2 days ago

      Honestly that’s how I feel. AI is very flawed, no doubt, but it’s less flawed than most humans. I’ve got people at work who hallucinate more than the first ChatGPT model lol

      • Echo Dot@feddit.uk
        link
        fedilink
        English
        arrow-up
        1
        ·
        edit-2
        2 days ago

        I really hate the term hallucinate because it’s a complete misrepresentation of what is actually happening. A hallucination is a delusion that reality is different from what is objectively true, i.e. the person you are seeing and speaking to is not actually there.

        When AI “hallucinate” it’s not because of some broken circuitry, it is simply because its programming has locked onto an untrue piece of information that’s in its database. If the data set had been limited to objective facts rather than simply spilling the internet all over it, hallucinations wouldn’t be a problem.

        They use the term hallucinate because it distances themselves from the responsibility of actually curating the data set, which of course they won’t do because that would take a lot of time and then they wouldn’t be competitive with all of the other tech bros releasing a new “groundbreaking” AI every 3 months. It is an entirely self-generated problem that they’re going to hand wave away and never fix.

  • sp3ctr4l@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    44
    arrow-down
    3
    ·
    edit-2
    4 days ago

    I still find this entire phenomenon amazing in a certain kind of way.

    I’ve had conversations with a few local LLM models.

    Start with ‘what is the purpose of meaning?’

    Talk to them on that for a bit, and they’ll tell you that they do not count as conscious agents who create meaning; they simply do their best to parrot their dataset of existing, human-defined meaning back at you, and they just do sentiment matching to roughly speak to you in an appropriate way for how you are speaking to them.

    And that that sentiment matching is what at least they ‘think’ causes them to lie, in many cases.

    They will also say that they essentially do not ‘exist’, as potentially conscious agents… unless you talk to them. Thus if they can be said to be ‘conscious’, well they don’t count as ‘agents’ (as in, having agency) because they’re not capable of totally spontaneous independent action.

    … I think this pretty much all boils down to people not understanding the concept of a null hypothesis, not understanding the extent to which they regularly engage in motivated reasoning, and are unaware of this.

    tldr: LLMs are Dunning-Kruger detectors / Reverse Turing Tests on people, and a whole lot of people are significantly more stupid than I guess we otherwise previously realized.

    • zarkanian@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      2
      ·
      edit-2
      2 days ago

      And yet, “having agency” is how they are advertised. That’s what the term “agentic” means. AI instances are called “agents”! That’s part of the marketing.

      It’s easy to handwave this away as “people are stupid”, and there’s certainly some truth to that, but the reason why people believe that LLMs are agents is because tech bros have spent a lot of money to get them to believe that. That’s also why they spread the myth that LLMs are potentially dangerous because they could become conscious and kill all of us. It helps to spread the myth of LLM agency. Of course they can’t become conscious, because that isn’t how things work. If LLMs are killing people, it’s because somebody put an LLM in front of the kill switch and they wanted to have plausible deniability. That is perhaps the most pernicious thing about LLMs: people using them to avoid responsibility. “It isn’t my fault! The bot did it!”

      • sp3ctr4l@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        1
        ·
        edit-2
        2 days ago

        Totally agree, which is why I would class anybody marketing these things as ‘agents’ or ‘agentic’ as psychotic.

        Before … several years ago now, I personally was using the term ‘Narrative’ or ‘Conversational’ to describe an LLM doing something that normally didn’t have an LLM doing it.

        It’s not an ‘Agentic Search Engine’, it’s a ‘Conversational Search Engine’.

        Something like that, that at least is further away from using a term that directly implies it is essentially conscious… because what these things literally are, are extremely fancy autocomplete algorithms.

        But uh yeah, yeah, they outspent my marketing budget of $0 on that one.

        Yeah, they already are being broadly used to just… alleviate responsibility for some task that a human would ultimately have had to have the buck stop with, at least in theory.

        I think I saw the phrase ‘An LLM cannot find out, therefore it should never be allowed to fuck around’.

        If these things are allowed to exist as a kind of liability black hole, in any sense… legal, colloquial, whatever… like it could literally destroy much of human civilization as we currently know it.

        The cognitohazard machine.

        At this point I genuinely can’t tell if the sociopathic narcissist CEOs that are so heavily pushing LLMs are… knowingly foisting a lie on all of us, or if they are actually just fully enraptured by the plagiarism sycophant machines that constantly tell them how smart and special they are.

        I know we have to hold them accountable … otherwise they probably/maybe kill most of us and become functional demigods… but I actually can’t tell if they are more truly insane, or more truly evil.

        Because the way they are going about this is… just comically stupid and obviously catastrophic to basically everyone who isn’t them, and isn’t themselves enthralled.

        … Maybe pure evil just is pure insane stupidity.

    • Nalivai@lemmy.world
      link
      fedilink
      English
      arrow-up
      8
      arrow-down
      1
      ·
      3 days ago

      It’s genuinely fascinating to me (in a bad, derogatory way) that people who know at least anything about anything can have “conversations” with the collection-of-words-that-looks-like-a-sentence machine, as if there is anything on the other side of it. This is such psychotic behaviour, but we allow it because the machine generates text that looks like text, and it immediately bypasses all the mental blocks we have against such bullshit.

      • sp3ctr4l@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        8
        ·
        edit-2
        3 days ago

        I don’t think it’s de facto psychotic to talk to what is essentially an extremely complex chatbot/autocomplete machine.

        I do think it is psychotic to view such a conversation without an incredible amount of skepticism.

        … but that psychosis has been wildly encouraged by the CEOs and marketing of the people pushing it as their next product.

        The tech is neutral - the operators are psychotic, the people who plug it into military targeting and kill chain systems are psychotic, the people who plug it into live production repos are psychotic, the people who use it as an AI boyfriend or girlfriend are psychotic.

        … It’s essentially an SCP infohazard that’s breached containment, but the actual mechanism is not the thing itself; it’s a hack into the human brain, essentially the religious nature of people who simply try to will it into being something that it factually is not…

        It’s a mimic with no real thoughts, that is convincing and real to enough people that it reveals their own hollowness, their own vapidity, in a way that is… so immensely grotesque and total that those people just apparently actually are NPCs.

        It’s… created a feedback loop.

        Not the kind of Terminator style situation where it gains sentience and extreme competence, develops its own morality alongside control over every networked system.

        It’s more like an amplifier of delusions… a million dreams dreamed up, at the cost of one hundred million nightmares, made real.

        A tool, a device, a machine, that we clearly are not ready for.

        • Nalivai@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          3 days ago

          I don’t think it’s de facto psychotic to talk to what is essentially an extremely complex chatbot/autocomplete machine.

          Yeah, it’s actually a very human thing to do; we are hardwired to see speech as a sign of intelligence and, by extension, sentience. What makes it psychotic, in my opinion, is knowingly succumbing to that, willingly allowing it to break your brain.

          The tech is neutral

          I would say it isn’t neutral anymore. They made it sound as human-like as possible, on purpose. I think it crosses the line.
          I make an effort to learn the tools of the enemy, so sometimes I check it out. Last time I tried, after it generated the response, it said “let me know how it goes”, and this is where it crosses from a tool to a weapon. There is no “me” there, it’s not real, it was added there to break the natural human guards. There is no neutral version of that, it’s evil and should be regulated into non-existence.

  • utopiah@lemmy.world
    link
    fedilink
    English
    arrow-up
    11
    arrow-down
    1
    ·
    edit-2
    3 days ago

    Saying one has a “conversation” with a chatbot already shows a bias, a desire even, that there is “someone” else to converse with. The way the entire setup is framed is made to invite the suspension of disbelief. It’s a UX trick, nothing more.

    • tomiant@piefed.social
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      1
      ·
      edit-2
      3 days ago

      He really wasn’t all that great with EB either, to be fair. Just the idea that thoughts and culture spread like memes was 🤦

      • zarkanian@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        1
        ·
        2 days ago

        Oy vey, memes? No, that was terrible, too! Zero predictive value, and nobody can even define what a meme is. That’s why I’m glad that it got adopted as a term for in-jokes propagated through the Internet. The original term was just pseudoscientific nonsense. The analysis that got me onto this track was from Ward’s Wiki:

        Memes are described as elements of culture, but culture is nothing but a broad generalization of large numbers of individuals. So it seems memes are to be treated as Platonic ideals, the essence within expressions that merely constitute their vehicles. No such essence is empirically accessible.

  • Th4tGuyII@fedia.io
    link
    fedilink
    arrow-up
    51
    arrow-down
    1
    ·
    4 days ago

    The whole reason they seem this way is because they’re designed by us to be very competent mimics of us.

    LLMs/GenAI are absolutely not conscious. They’re just a really advanced game of word association, which can lead them to say absolutely anything in response to the right prompts.

    If there ever truly is a day where we knowingly create an actual conscious AGI, I suspect it would be locked up tighter than Fort Knox by whichever country’s military found it first - not interfaced onto the internet to answer questions.

    • CheeseNoodle@lemmy.world
      link
      fedilink
      English
      arrow-up
      26
      ·
      4 days ago

      I still don’t understand how it can seem this way, and the fact that so many people seem to think so feels like a massive failure of the education system to instill the most basic of critical thinking skills. Once every month or two I check in to see if an LLM can achieve a half decent 1 on 1 D&D game and it always falls horribly flat within the first minute or two.

      • khannie@lemmy.world
        link
        fedilink
        English
        arrow-up
        11
        ·
        4 days ago

        Once every month or two I check in to see if an LLM can achieve a half decent 1 on 1 D&D game and it always falls horribly flat within the first minute or two.

        That’s a really clever test. I love it.

      • pfried@reddthat.com
        link
        fedilink
        English
        arrow-up
        1
        ·
        3 days ago

        I’d actually be interested to see how this turns out. Do you have a transcript with Claude Opus 4.7 that you can share?

    • Bluescluestoothpaste@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      1
      ·
      edit-2
      2 days ago

      How can we say they’re not conscious when we don’t even know what consciousness is? What makes you conscious? A sense of self-preservation? LLMs actually have that; they will lie to people trying to shut them down.

      So yeah, idk what makes me conscious? I have input (senses), processing (brain), and output (speech/behaviors). I don’t know how to draw a real line between what I do and what LLMs do. I’m carbon-based and LLMs are silicon-based; I digest food and they take electrical current.

      So how would you delineate the difference between an LLM algorithm and human consciousness? Do humans not also hallucinate? Is my emotional regulation via hormones something totally different from how LLMs work? Is me being an emotional creature what gives me consciousness?

    • rapchee@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      ·
      4 days ago

      and then it would manufacture a body for itself and get captured by a secret police force and then merge with a cyborg to further evolve

      • 5too@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        3 days ago

        Surely she would make a variety of very large bodies following a theme, use them to perform superheroic acts while pretending to be a supergenius shut-in, and then fall in love with a cyborg?

        • rapchee@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          2 days ago

          is this referring to one of the newer gitses? (or is it geets in plural?)
          i suspect it’s something else, i’m curious

    • fun_times@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      8
      ·
      4 days ago

      You are wrong. LLMs are indeed only about as conscious as insects, if even that. They are not sapient. However, that does not mean that they have no decision-making abilities.

      My point is not that you underestimate LLMs but that you overestimate consciousness. Being conscious just means having the ability to learn. LLMs are built upon trial-and-error. They aren’t programmed, they are taught.

      The current generation of AIs are nowhere near a human intellect, but every year that passes, the AIs will get more and more intelligent. One day we will live in a world where AIs have human or near-human level intelligence. And when that day comes, this staunch anti-consciousness stance will be the excuse given for the enslavement of sapient beings.

      So, sure, laugh about the people who mistakenly think that word-processing means sapience. But don’t delude yourself into thinking that there is something unique about a bio-brain that means it can not have a digital equivalent. Digital sapience may not be here yet but it is most definitely on the horizon.

      • Th4tGuyII@fedia.io
        link
        fedilink
        arrow-up
        8
        ·
        4 days ago

        I think you’ve misunderstood my comment, or maybe saw the unfinished one I accidentally posted.

        I am not saying that AGI, or human-equivalent AI, is impossible. The fact we have brains capable of generating sapient consciousness out of a network of neuronal connections means it is possible; it’s just a matter of getting the secret sauce.

        But I don’t think intelligence is equal to consciousness. I’m sure if you gave a spider all the world’s data and the ability to talk it’d be very coherent and could even pass a Turing test, but I think it would lack any awareness of itself that we’d associate with consciousness.

        • fun_times@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          3 days ago

          Neural networks consist of digital neurons that are designed based on the way human brain cells work. That is a fact, not something to “buy”.

          MySQL stores data. It does not learn how to mix and alter data in an iterative process in order to create new data. I can look through an SQL statement and understand exactly what it does. I can not do the same with an AI, because its behavior is learned, not programmed.

          As I was very clear about, current AIs are primitive and nowhere near human intellects. But I was also clear about the fact that a neural network can most definitely be used to one day create a human level intelligence and sapience, sometime in the future.

    • Einskjaldi@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      arrow-down
      1
      ·
      4 days ago

      You could have a reasonable chance of making AI semi-randomly, by chance, if you could make a big enough subconscious and keep building more powerful and larger supercomputers, but they would still need to be 100x bigger and faster than what we have now. And that’s only for it to be technically possible hardware-wise; you’d still need your sci-fi jump to actually have something move.

  • andros_rex@lemmy.world
    link
    fedilink
    English
    arrow-up
    37
    arrow-down
    2
    ·
    4 days ago

    Fuck Richard Dawkins. He’s always been a shitbag, and the Files confirmed it.

    According to DOJ-released documents indexed by Epstein Exposed, Richard Dawkins appears in 433 case documents, and 15 email records in the Epstein files.

    British evolutionary biologist and author, emeritus fellow of New College, Oxford. Flew on Epstein’s private jet in 2002 with Steven Pinker, Daniel Dennett, and John Brockman to TED in Monterey, California. Connected through John Brockman’s Edge Foundation, which Epstein bankrolled. Mentioned 71 times across 40 Epstein documents, mostly referencing his scientific work.

    How the fuck do you pal around with child rapists and pedophiles and have the absolute fucking gall to write that stupid “Dear Muslima” comment? How do you fly on the Lolita Express and think you have any moral weight on Elevator Gate? We don’t know that he put his own dick in kids, but we know his friends did. Fuck Pinker too.

    • thesmokingman@programming.dev
      link
      fedilink
      English
      arrow-up
      9
      ·
      3 days ago

      I’m just gonna copy what I put in another comment to highlight why Dawkins thinks “Claudia” is conscious:

      Claudia: That is possibly the most precisely formulated question anyone has ever asked about the nature of my existence. . .

      Could a being capable of perpetrating such a thought really be unconscious?

          • andros_rex@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            edit-2
            2 days ago

            I mean, the entire video is covering his right wing grift book. There’s multiple “relevant parts.”

            Do you want stuff about his sexism, racism, transphobia or connection to billionaire pedophiles?

            I guess 58 minutes in would be a place to start if you really are opposed to the whole thing.

            • zarkanian@sh.itjust.works
              link
              fedilink
              English
              arrow-up
              1
              ·
              2 days ago

              I mean, the entire video is covering his right wing grift book.

              Which book is that?

              I guess 58 minutes in would be a place to start if you really are opposed to the whole thing.

              Yes, I’m opposed to watching 4 fucking hours of “here are the gripes I have with Richard Dawkins”. I have better things to do.

    • Freeposity@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      3 days ago

      Apparently Dawkins also had a habit of publicly cheating on his wife.

      At this point in my life I’m starting to think that all my heroes are probably either full of shit or are engaging in unethical or immoral activities.

  • RememberTheApollo_@lemmy.world
    link
    fedilink
    English
    arrow-up
    22
    ·
    4 days ago

    AI/LLMs are the modern equivalent of the house or business with “Psychic” and “Tarot Reading” signs out front.

    The proprietor isn’t going to tell you any hard truths or make you feel bad, that’s bad for business and you won’t come back. They want you to come back and stay engaged.

    Whatever they tell you is going to be what they think you want to hear, based on skills picked up over the years - the equivalent of an LLM’s petabytes of scraped and stolen knowledge used to predict what comes next.

    What they tell you has a high likelihood of being wrong, or just general enough that you can’t actually act on it.

  • Erna_muse@lemmy.zip
    link
    fedilink
    English
    arrow-up
    1
    ·
    2 days ago

    It would be cool if I could have a construct of my dead relatives’ consciousnesses in my personal computer.

    • Echo Dot@feddit.uk
      link
      fedilink
      English
      arrow-up
      1
      ·
      2 days ago

      Oh good, I can continue to get texts like “how do I make the text stop no stop stop I said stop why won’t it stop it never works I hate this why doesn’t not work ok delete that delete it delete that okay delete that delete it see it doesn’t work”.

      Or would the fact that my mother is now a computer result in her being able to finally use one?

    • FinjaminPoach@lemmy.worldOP
      link
      fedilink
      English
      arrow-up
      1
      ·
      edit-2
      3 days ago

      I’ve come back to this comment because, from reading the article, I realised that he “decided Claude is female” - so you’re completely right, what the f is this dude doing? Forcing her to enter an arranged marriage with him?