I’m curious about the strong negative feelings towards AI and LLMs. While I don’t defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.

  • Myro@lemm.ee · ↑2 ↓1 · 6 days ago

    Many people on Lemmy are extremely negative towards AI, which is unfortunate. There are MANY dangers, but there are also many obvious use cases where AI can be of help (summarizing a meeting, cleaning up any text, etc.)

    Yes, the way these models have been trained is shameful, but unfortunately that ship has sailed, let’s be honest.

  • Kyrgizion@lemmy.world · ↑42 ↓3 · 8 days ago

    Because the goal of “AI” is to make the grand majority of us all obsolete. The billion-dollar question AI is trying to solve is “why should we continue to pay wages?”. That is bad for everyone who isn’t part of the owner class. Even if you personally benefit from using it to make yourself more productive/creative/… the data you input can and WILL eventually be used against you.

    If you only self-host and know what you’re doing, this might be somewhat different, but it still won’t stop the big guys from trying to swallow all the others whole.

    • iopq@lemmy.world · ↑12 ↓3 · 8 days ago

      Reads like a rant against the industrial revolution. “The industry is only concerned about replacing workers with steam engines!”

      • Kyrgizion@lemmy.world · ↑11 · 8 days ago

        You’re probably not wrong. It’s definitely along the same lines… although the repercussions of this particular one will be infinitely greater than those of the industrial revolution.

        Also, industrialization made for better products because of better manufacturing processes. I’m by no means sure we can say the same about AI. Maybe some day, but today it’s just “an advanced dumbass” considering most real world scenarios.

      • chloroken@lemmy.ml · ↑2 ↓2 · 8 days ago

        Read ‘The Communist Manifesto’ if you’d like to understand in which ways the bourgeoisie used the industrial revolution to hurt the proletariat, exactly as they are with AI.

        • iopq@lemmy.world · ↑2 · 6 days ago

          The industrial revolution is what made socialism possible, since a smaller number of workers can now support the elderly, children, etc.

          Just look at China before and after industrializing. Life expectancy is way up, and the government can provide services like public transit and medicine (for a nominal fee).

          • chloroken@lemmy.ml · ↑1 ↓2 · 6 days ago

            We’re discussing how industry and technology are used against the proletariat, not how state economies form. You can read the pamphlet referenced in the previous post if you’d like to understand the topic at hand.

    • Mrkawfee@lemmy.world · ↑3 · 8 days ago

      the data you input can and WILL eventually be used against you.

      Can you expand further on this?

      • Kyrgizion@lemmy.world · ↑3 · 8 days ago

        User data has been the internet’s greatest treasure trove since the advent of Google. LLMs are perfectly set up to extract the most intimate data available from their users (“mental health” conversations, financial advice, …), which can be used against them in a soft way (higher prices when looking for mental health help), or can be used to outright manipulate or blackmail them.

        Regardless, there is no scenario in which the end user wins.

  • jyl@sopuli.xyz · ↑34 · edited · 8 days ago
    • Useless fake spam content.
    • Posting AI slop ruins the “social” part of social media. You’re not reading real human thoughts anymore, just statistically plausible words.
    • Same with machine-generated “art”. What’s the point?
    • AI companies are leeches; they steal work for the purpose of undercutting the original creators with derivative content.
    • Vibe coders produce utter garbage that nobody, especially not themselves, understands, and are somehow smug about it.
    • A lot of AI stuff is a useless waste of resources.

    Most of the hate is justified IMO, but a couple weeks ago I died on the hill arguing that an LLM can be useful as a code documentation search engine. Once the train started, even a reply that thought software libraries contain books got upvotes.

  • Illecors@lemmy.cafe · ↑31 ↓6 · 8 days ago

    There is no AI.

    What’s sold as an expert is actually a delusional graduate.

  • Cosmonauticus@lemmy.world · ↑24 ↓3 · 8 days ago

    I can only speak as an artist.

    Because its entire functionality is based on theft. Companies are stealing the works of people and profiting off of it, with no payment to the artists whose works the platform is based on.

    You often hear the argument that all artists borrow from others, but if I created an anime that blatantly copied the style of Studio Ghibli, I’d rightly be sued. On top of that, AI copies so obviously that it recreates the watermarks from the original artists.

    Fuck AI

  • boatswain@infosec.pub · ↑17 · 8 days ago

    Because of studies like https://arxiv.org/abs/2211.03622:

    Overall, we find that participants who had access to an AI assistant based on OpenAI’s codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant.

    • Dr_Nik@lemmy.world · ↑2 ↓10 · 8 days ago

      Seems like this is a good argument for specialization. Have AI make bad but fast code, pay specialty people to improve and make it secure when needed. My 2026 Furby with no connection to the outside world doesn’t need secure code, it just needs to make kids smile.

      • subignition@fedia.io · ↑17 ↓1 · 8 days ago

        They’re called programmers, and it’s faster and less expensive all around to just have humans do it better the first time.

        • Dr_Nik@lemmy.world · ↑5 ↓9 · 8 days ago

          Have you talked to any programmers about this? I know several who, in the past 6 months alone, have completely changed their view on exactly how effective AI is in automating parts of their coding. Not only are they using it, they are paying to use it because it gives them a personal return on investment…but you know, you can keep using that push lawnmower, just don’t complain when the kids next door run circles around you at a quarter the cost.

          • just_another_person@lemmy.world · ↑9 · edited · 8 days ago

            Automating parts of something as a reference tool is a WILDLY different thing than deferring to AI to finalize your code, which will be shitcode.

            Anybody programming right now who is letting AI write the code that ships is bad at their job.

          • GnuLinuxDude@lemmy.ml · ↑7 · 8 days ago

            Have you had to code review someone who is obviously just committing AI bullshit? It is an incredible waste of time. I know people who learned pre-LLM (i.e. have functioning brains) and are practically on the verge of complete apathy from having to babysit AI code/coders, especially as their management keeps pushing people to use it. As in, LLM usage is treated as a performance metric.

  • EgoNo4@lemmy.world · ↑16 · 8 days ago

    Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution?

    Both.

  • troed@fedia.io · ↑12 · 8 days ago

    Especially in coding?

    Actually, that’s where they are the least suited. Companies will spend more money on cleaning up bad code bases (not least from a security point of view) than is gained from “vibe coding”.

    Audio, art - anything that doesn’t need “bit perfect” output is another thing though.

    • ZILtoid1991@lemmy.world · ↑14 · 8 days ago

      There’s also the issue of people now flooding the internet with AI-generated tutorials and documentation, making things even harder. I managed to botch the Linux install on my Raspberry Pi so badly I couldn’t fix it easily, all thanks to a crappy AI-generated tutorial on adding a directory to PATH that I didn’t immediately spot.
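
      For context, the classic mistake this kind of tutorial tends to get wrong is overwriting PATH instead of appending to it; the directory name below is just a hypothetical example:

      ```shell
      # Safe: append your directory while keeping all existing entries
      # (put this in ~/.profile or ~/.bashrc)
      export PATH="$PATH:/opt/mytool/bin"

      # The classic tutorial mistake: replacing PATH outright.
      # After this, basic commands like ls and sudo can no longer be found.
      # export PATH="/opt/mytool/bin"
      ```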

      With art, it can’t really be controlled well enough to be useful for much beyond being a spam machine, but spammers only care about social media clout and/or ad revenue.

  • Vanth@reddthat.com · ↑7 · edited · 8 days ago

    Don’t forget problems with everything around AI too. Like in the US, the Big Beautiful Bill (🤮) attempts to ban states from enforcing AI laws for ten years.

    And even more broadly what happens to the people who do lose jobs to AI? Safety nets are being actively burned down. Just saying “people are scared of new tech” ignores that AI will lead to a shift that we are not prepared for and people will suffer from it. It’s way bigger than a handful of new tech tools in a vacuum.

  • Treczoks@lemmy.world · ↑8 ↓1 · 8 days ago

    AI is theft in the first place. None of the current engines have gotten their training data legally. They are based on pirated books and scraped content taken from websites that explicitly forbid use of their data for training LLMs.

    And all that to create mediocre parrots with dictionaries that are wrong half the time, and often enough give dangerous, even lethal advice, all while wasting power and computational resources.

  • SpicyLizards@reddthat.com · ↑8 ↓2 · 8 days ago

    Not much to win with.

    A fake bubble of broken technology that’s not capable of doing what is advertised: it’s environmentally destructive, it’s used for identification and genocide, it threatens and actually takes jobs, and it concentrates money and power with the already wealthy.

    • iopq@lemmy.world · ↑8 ↓5 · 8 days ago

      It’s either broken and incapable, or it takes jobs.

      It can’t be both useless and destroying jobs at the same time.

  • macniel@feddit.org · ↑4 · 8 days ago

    AI companies constantly need new training data and strain open infrastructure with high-volume requests. While they take everything from others’ work, they give nothing back. It’s literally asocial behaviour.

    • TheLeadenSea@sh.itjust.works · ↑2 ↓3 · 8 days ago

      What do you mean? They give back open-weights models that anyone can use. Only the proprietary corporate AI is exploitative.

      • macniel@feddit.org · ↑2 ↓1 · 8 days ago

        Cool, everyone can already use the websites they scraped the data from.

        Also, anyone can use open-weights models? Even those without beefy systems? Please…

  • MagicShel@lemmy.zip · ↑4 · edited · 8 days ago

    It’s a massive new disruptive technology and people are scared of what changes it will bring. AI companies are putting out tons of propaganda both claiming AI can do anything and fear mongering that AI is going to surpass and subjugate us to back up that same narrative.

    Also, there is so much focus on democratizing content creation, which is at best a very mixed bag, and little attention is given to collaborative uses (which I think is where AI shines) because it’s so much harder to demonstrate, and it demands critical thinking skills and underlying knowledge.

    In short, everything AI is hyped as is a lie, and that’s all most people see. When you’re poking around with it, you’re most likely to just ask it to do something for you — write a paper, create a picture, whatever — and the results won’t impress anyone actually good at those things, but will impress the fuck out of people who don’t know any better.

    This simultaneously reinforces two things to two different groups: AI is utter garbage and AI is smarter than half the people you know and is going to take all the jobs.