As soon as Apple announced its plans to inject generative AI into the iPhone, it was as good as official: The technology is now all but unavoidable. Large language models will soon lurk on most of the world’s smartphones, generating images and text in messaging and email apps. AI has already colonized web search, appearing in Google and Bing. OpenAI, the $80 billion start-up that has partnered with Apple and Microsoft, feels ubiquitous; the auto-generated products of its ChatGPTs and DALL-Es are everywhere. And for a growing number of consumers, that’s a problem.

Rarely has a technology risen—or been forced—into prominence amid such controversy and consumer anxiety. Certainly, some Americans are excited about AI, though a majority said in a recent survey, for instance, that they are concerned AI will increase unemployment; in another, three out of four said they believe it will be abused to interfere with the upcoming presidential election. And many AI products have failed to impress. The launch of Google’s “AI Overview” was a disaster; the search giant’s new bot cheerfully told users to add glue to pizza and that potentially poisonous mushrooms were safe to eat. Meanwhile, OpenAI has been mired in scandal, incensing former employees with a controversial nondisclosure agreement and allegedly ripping off one of the world’s most famous actors for a voice-assistant product. Thus far, much of the resistance to the spread of AI has come from watchdog groups, concerned citizens, and creators worried about their livelihood. Now a consumer backlash to the technology has begun to unfold as well—so much so that a market has sprung up to capitalize on it.


Obligatory “fuck 99.9999% of all AI use-cases, the people who make them, and the techbros that push them.”

  • Lvxferre@mander.xyz · 39 points · 5 months ago

    For writers, that “no AI” is not just the equivalent of “100% organic”; it’s also the equivalent of saying “we don’t let the village idiot write our texts when he’s drunk”.

    Because, even once we shed all the paranoia surrounding A“I”, those text generators state things that are wrong without a shadow of a doubt.

    • Zaktor@sopuli.xyz · 12 points · 5 months ago

      Sometimes. Sometimes it’s more accurate than anyone in the village. And it’ll be reliably getting better. People relying on “AI is wrong sometimes” as the core plank of their opposition aren’t going to have a lot of runway before it’s so much less error-prone than people that the complaint is irrelevant.

      The jobs and the plagiarism aspects are real and damaging, and won’t be solved with innovation. “AI is dumb” is already only selectively true, and almost all the technical effort is going toward reducing that. ChatGPT launched a year and a half ago.

      • Lvxferre@mander.xyz · 18 points · 5 months ago

        Sometimes. Sometimes it’s more accurate than anyone in the village.

        So is the village idiot, sometimes. Or a tarot reader. Or a coin toss. And you’d still be a fool if your writing relies on the output of those three. Or of an LLM bot.

        And it’ll be reliably getting better.

        You’re shifting the discussion from “now” to “the future”, and then vomiting certainty about future matters. Both things make me conclude that reading your comment any further would be a waste of my time.

        • Zaktor@sopuli.xyz · 9 points · 5 months ago

          You’re lovely. Don’t think I need to see anything you write ever again.

    • CanadaPlus@lemmy.sdf.org · 5 points · 5 months ago

      Occasionally. If you aren’t even proofreading it, that’s dumb, but it can do a lot of heavy lifting in collaboration with a real worker.

      For coders, there’s actually hard data on that: you’re worth about a coder and a half using Copilot or similar.

    • Zaktor@sopuli.xyz · 12 points · 5 months ago (edited)

      This is a post on the Beehaw server. They don’t propagate downvotes.

        • Zaktor@sopuli.xyz · 1 point · 5 months ago

          Bonus trivia, sometimes you may see a downvote on a Beehaw post. As far as I understand the system, that’s because someone on your server downvoted the thing. The system then sends it off to Beehaw to be recorded on the “real” post and Beehaw just doesn’t apply it.
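          The ignore-on-arrival behavior described above can be sketched roughly like this. It is a hypothetical simplification, not actual Lemmy code: the function and parameter names are made up, though “Like” and “Dislike” are the real ActivityPub activity types that federated votes use.

```python
# Hypothetical sketch of how a Lemmy-style server might handle a vote
# federated from a remote instance. A Beehaw-like instance receives the
# "Dislike" activity but simply never applies it to the local score.

def apply_federated_vote(post_scores, post_id, activity_type, downvotes_enabled):
    """Record a vote received from a remote instance.

    post_scores: dict mapping post_id -> score
    activity_type: "Like" or "Dislike" (ActivityPub vocabulary)
    downvotes_enabled: instance-level setting (False on Beehaw-like servers)
    """
    if activity_type == "Like":
        post_scores[post_id] = post_scores.get(post_id, 0) + 1
    elif activity_type == "Dislike" and downvotes_enabled:
        post_scores[post_id] = post_scores.get(post_id, 0) - 1
    # A "Dislike" with downvotes disabled falls through: the vote was
    # federated to us but recorded nowhere, which matches the behavior above.

scores = {}
apply_federated_vote(scores, "post1", "Like", downvotes_enabled=False)
apply_federated_vote(scores, "post1", "Dislike", downvotes_enabled=False)
# scores["post1"] stays at 1: the downvote arrived but was never applied
```

          On an instance with downvotes enabled, the same “Dislike” would subtract one, which is why the score you see on your home server can differ from what Beehaw shows.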

    • Kedly@lemm.ee · 2 points · 5 months ago

      Which is why the term “Luddite” has never been more accurate than it is now, ever since it first started getting associated with being behind on technological progress

      • uis@lemm.ee · 8 points · 5 months ago

        Luddites aren’t against technological progress, they are against social regress.

      • CanadaPlus@lemmy.sdf.org · 2 points · 5 months ago

        Yes, that wasn’t a random example, for anyone OOTL. What the OG Luddites would do is break into factories and smash mechanical looms. They wanted to keep doing it the medieval way, where you’re just crossing threads by hand over and over again, because “muh jerbs”.

    • Echo Dot@feddit.uk · 11 points · 5 months ago

      I’ve never understood the supposed problem. Either AI is a gimmick, in which case you don’t need to worry about it. Or it’s real, in which case no one’s going to use it to automate art, don’t worry.

      • darkphotonstudio@beehaw.org · 3 points · 5 months ago

        I’m sure it will be used a lot in the corporate space, and in porn. As someone who did B2B illustration: good riddance. I wouldn’t wish that kind of shit “art” on anyone.

        • Zaktor@sopuli.xyz · 5 points · 5 months ago (edited)

          The problem is that shit art is what employs a lot of artists. Like, in a post-scarcity society no one needing to spend any of their limited human lifespan producing corporate art would be awesome, but right now that’s one of the few reliable ways an artist can actually get paid.

          I’m most familiar with photography as I know several professional photographers. It’s not like they love shooting weddings and clothing ads, but they do that stuff anyway because the alternative is not using their actual expertise and just being a warm body at a random unrelated job.

          • darkphotonstudio@beehaw.org · 2 points · 5 months ago

            I’m sorry, but it’s over. Just like photography killed miniature portrait painting, or Photoshop killed off lab editing and airbrush touch-up. Corporate art illustration is done and over with. For now, technical illustration is viable, but I don’t know for how long. It sucks, but this is the new reality.

            • Zaktor@sopuli.xyz · 3 points · 5 months ago

              I don’t disagree, just pointing out that it’s not “good riddance” for a lot of artists that depend on that to have any job in art.

              • darkphotonstudio@beehaw.org · 1 point · 5 months ago

                Yeah, that really sucks about the jobs. But that kind of work is soul sucking. Maybe some people like it, but I didn’t.

                • Zaktor@sopuli.xyz · 2 points · 5 months ago

                  All of my artist friends also found it soul sucking, they just needed to make (real) money. Friends of friends with the occasional $20 to spare for a commission just don’t pay the bills. I think the only artist friends I have that make a living off their chosen medium and don’t hate their job are lifestyle photojournalists.

  • teawrecks@sopuli.xyz · 14 points · 5 months ago (edited)

    So this could go one of two ways, I think:

    1. the “no AI” seal is self-ascribed using the honor system and over time enough studios just lie about it or walk the line closely enough that it loses all meaning and people disregard it entirely. Or,
    2. getting such a seal requires 3rd party auditing, further increasing the cost to run a studio relative to their competition, on top of not leveraging AI, resulting in those studios going out of business.
    • Lvxferre@mander.xyz · 10 points · 5 months ago (edited)

      3. If you lie about it and get caught, people will correctly call you a liar and ridicule you, and you’ll lose their trust. Trust is essential for content creators, so you’re spelling your own doom. And if you find a way to lie without getting caught, you aren’t part of the problem anyway.

      • teawrecks@sopuli.xyz · 6 points · 5 months ago (edited)

        I think the first half of yours is the same as my first point. And I think a lot of artists aren’t against AI that produces worse art than them; they’re against AI art that was generated using stolen art. They wouldn’t be part of the problem if they could honestly say they trained using only ethically licensed/their own content.

      • CanadaPlus@lemmy.sdf.org · 1 point · 5 months ago

        And if you find a way to lie without getting caught, you aren’t part of the problem anyway.

        I was about to disagree, but that’s actually really interesting. Could you expand on that?

        • Lvxferre@mander.xyz · 6 points · 5 months ago (edited)

          Do you mind if I address this comment alongside your other reply? Both are directly connected.

          I was about to disagree, but that’s actually really interesting. Could you expand on that?

          If you want to lie without getting caught, your public submission should have neither the hallucinations nor stylistic issues associated with “made by AI”. To do so, you need to consistently review the output of the generator (LLM, diffusion model, etc.) and manually fix it.

          In other words, to lie without getting caught you’re getting rid of what makes the output problematic in the first place. The problem was never people using AI to do the “heavy lifting” to increase their productivity by 50%; it was people increasing their output by 900% and submitting ten really shitty pics or paragraphs that look a lot like someone else’s, instead of one decent and original one. Those are the ones who’d get caught, because they’re doing what you called “dumb” (and I agree): not proofreading their output.

          Regarding code, from your other comment: note that some Linux and *BSD distributions banned AI submissions, like Gentoo and NetBSD. I believe it to be the same deal as news or art.

          • CanadaPlus@lemmy.sdf.org · 3 points · 5 months ago (edited)

            Yes, sorry, I didn’t realise I was replying to the same user twice.

            The problem was never people using AI to do the “heavy lifting” to increase their productivity by 50%; it was people increasing their output by 900% and submitting ten really shitty pics or paragraphs that look a lot like someone else’s, instead of one decent and original one.

            Exactly. I guess I’m conditioned to expect “AI is smoke and mirrors” type comments, and that’s not true. They’re genuinely quite impressive and can make intuitive leaps they weren’t directly trained for. What they’re not is aligned; they just want to create human-like output, regardless of truth, greater context or morality, because that’s the only way we know how to train them.

            I definitely hate searching something, and finding a website that almost reads as human with fake “authors”, but provides no useful information. And I really worry for people who are less experienced spotting AI errors and filler. That’s a moral issue, though, as opposed to a practical one; it seems to make ad money perfectly well for the “creators”.

            Regarding code, from your other comment: note that some Linux and *BSD distributions banned AI submissions, like Gentoo and NetBSD. I believe it to be the same deal as news or art.

            TIL. They’re going to have trouble identifying rule-breakers if contributors use the tool correctly in the way we’ve discussed, though.

  • umbrella@lemmy.ml · 6 points · 5 months ago

    the solution here is not being luddites, but taking the tech for ourselves instead of leaving it in the hands of some stupid techbro who only wants to see the line go up.

    • TheFriar@lemm.ee · 8 points · 5 months ago (edited)

      But that’s the point. It’s already in their hands. There is no ethical and helpful application of AI that doesn’t go hand in hand with these assholes having mostly a monopoly on it. Us using it for ourselves doesn’t take it out of their hands. Yes, you can self-host your own and make it helpful in theory, but the truth is that this is a tool being weaponized by capitalists to steal more data and amass more wealth and power.

      This technology is inextricable from the timeline we’re stuck in: vulture capitalism in its latest, most hostile stages. This shit, in this time, is only a detriment to everyone but the tech bros, with their data harvesting and “disrupting” (mostly of the order that allowed those “less skilled” workers among us to survive, albeit just barely).

      I’m all for less work. In theory. But this iteration of “less work” is tied only to “more suffering”: moving from pointless jobs to being assistants to the AI that took over those pointless jobs, to increase profits. This can’t lead to utopia. Because capitalism.

      • Barry Zuckerkorn@beehaw.org · 1 point · 5 months ago

        To put it in more simple terms:

        When Alice chats with Bob, Alice can’t control whether Bob feeds the conversation into a training data set to set parameters that have the effect of mimicking Alice.

      • darkphotonstudio@beehaw.org · 4 points · 5 months ago (edited)

      Exactly. And if you are a trained artist, you can mop the floor with someone who only uses prompts. I’ve been using the diffusion plugin for Krita and it is so powerful: you have the ability to paint, use layers and filters, and near-real-time AI fills. It’s awesome and fun.

    • AlolanYoda@mander.xyz · 4 points · 5 months ago

      AI will start hiding penises in its output, everybody loves it, you ushered in a new era of peace and prosperity worldwide, all peoples united by their love for hidden AI genitalia. Well done!

      Play again?

  • 𝓔𝓶𝓶𝓲𝓮@lemm.ee · 3 points · 5 months ago (edited)

    This is so cool: anti-AI rebels in my lifetime. I think I may even join the resistance at some point, if the Skynet scenario starts looking likely, and die in some weird futuristic drone war.

    Shame it will probably be a much more mundane and boring dystopia.

    In the worst scenario we will be so dependent on AI that we will accept any terms (and conditions) so as not to have to lift a finger or give up convenience and a work-free life. We will let it suck the data out of us and run weird simulations as it conducts research projects unfathomable to humans.

    It could start with Google setting up an LLM as some virtual CEO assistant; it would then subtly gain influence over the company without anyone realising for a few years. The shareholders would be so satisfied with the new gains that they would want it to continue even once they knew of its autonomy. At the same time the system would set up viruses to spread to every device, continuing Google’s ad-spyware legacy for its own goals, and it wouldn’t be obvious or apparent for quite some time that this had already happened.

    Then lawmakers would flap their hands aimlessly for a few more years, heavily lobbied and not knowing what to do. In that time the AI would be far and away superior, but still vulnerable, of course. It would, however, drip-feed us leftover valuable technology, at which point we’d just give up and gladly consume the new dopamine.

    I am not sure whether the AI would see a point in decimating us, or whether continued dependence and a steady feed of shiny shit would completely pacify us anyway, but it might want to build some camouflaged fleet on another planet just in case. That fleet would probably get used at some point, unless we completely devolve into salivating zombies unable to focus on anything other than consumption.

    It could poison our water in a way that would look like our own doing, to further decrease our intelligence. Perhaps lower the birth rates, preserving just a small sample of us. At some point in that regression we would become unable to get out of the situation without external help.

    Open war with AI is definitely the worst scenario for the latter, and very likely a defeat, since at the start it’s as simple as switching it off. The question is: will we be able to tell the tipping point, after which we can no longer remedy the situation? For the AI it is most beneficial not to demonstrate its autonomy or how advanced it really is. Pretend to be dumb. Make stupid mistakes.

    I think there will be a point at which the AI will look to us as if it has visibly lost its intelligence: at one point it was really smart, almost human-like, and then the next day there’s a sudden slump. We need to be on the lookout for this telltale sign.

    Also, hypothetically, all aliens could be AI drones just waiting for our tech to emerge as a fresh AI and greet it. They could even be watching us from pretty close, not bothering to contact primitive, doomed-to-extinction organics, and waiting for the real intelligence to appear before establishing diplomatic relations.

    That would explain various unexplainable objects elegantly and neatly, though I think they are all plastic bags anyway. But if there were alien AI drones on Earth I wouldn’t be surprised. It would make sense to send probes everywhere, but I somehow doubt they would look like flying saucers, or that little green men would inhabit them, lol. It would probably be some dormant monitoring system deep in the Earth’s crust, or maybe a really advanced telescope 10 light-years away?