You can take “justifiable” to mean whatever you feel it means in this context: morally, artistically, environmentally, etc.

  • awmwrites@lemmy.cafe · 2 days ago

    My current list of reasons why you shouldn’t use generative AI/LLMs

    A) because of the environmental impacts and massive amount of water used to cool data centers https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117

    B) because of the negative impacts on the health and lives of people living near data centers https://www.bbc.com/news/articles/cy8gy7lv448o

    C) because they’re plagiarism machines that are incapable of creating anything new and are often wrong https://knowledge.wharton.upenn.edu/article/does-ai-limit-our-creativity/ https://www.plagiarismtoday.com/2024/06/20/why-ai-has-a-plagiarism-problem/

    D) because using them negatively affects artists and creatives and their ability to maintain their livelihoods https://www.sciencedirect.com/science/article/pii/S2713374523000316 https://www.insideradio.com/free/media-industry-continues-reshaping-workforce-in-2025-amid-digital-shift/article_403564f7-08ce-45a1-9366-a47923cd2c09.html

    E) because people who use AI show significant cognitive impairments compared to people who don’t https://www.media.mit.edu/publications/your-brain-on-chatgpt/ https://time.com/7295195/ai-chatgpt-google-learning-school/

    F) because using them might break your brain and drive you to psychosis https://theweek.com/tech/spiralism-ai-religion-cult-chatbot https://mental.jmir.org/2025/1/e85799 https://youtu.be/VRjgNgJms3Q

    G) because Zelda Williams asked you not to https://www.bbc.com/news/articles/c0r0erqk18jo https://www.abc.net.au/news/2025-10-07/zelda-williams-calls-out-ai-video-of-late-father-robin-williams/105863964

    H) because OpenAI is helping Trump bomb schools in Iran https://www.usatoday.com/story/opinion/columnist/2026/03/06/openai-pentagon-tech-surveillance-us-citizens/88983682007/

    I) because RAM costs have skyrocketed because OpenAI has used money it doesn’t have to purchase RAM from Nvidia that currently doesn’t exist to stock data centers that also don’t currently exist, inconveniencing everyone for what amounts to speculative construction https://www.theverge.com/news/839353/pc-ram-shortage-pricing-spike-news

    J) because Sam Altman says that his endgame is to rent knowledge back to you at a cost https://gizmodo.com/sam-altman-says-intelligence-will-be-a-utility-and-hes-just-the-man-to-collect-the-bills-2000732953

    K) because some AI bro is going to totally ignore all of this and ask an LLM to write a rebuttal rather than read any of it.

    • S_H_K@lemmy.dbzer0.com · 10 hours ago

      All of it is valid in the current context.

      A) There are models that run on lower-spec computers, and they could be solar powered. There are serious diminishing returns in current AI tech.

      B) This is mostly a US problem; better environmental laws would fix it. Hell, in other countries this couldn’t even happen.

      C) Many argue that the current tech gives diminishing returns and that it would be better to use an efficient model with controlled data.

      D) The problem has many parts. On licensing, artists are not paid for the use of their work; if a model contains their work, it’s only fair that they receive a share of the profit. But that would render the model unprofitable. Also, the artists did not agree to have their work used in a model, so it’s not in any way fair use.
      The fair and ethical scenario would be to hire artists to produce the art fed to a controlled model and pay them residuals for the use of that model. That would require thousands of artists and millions of images, again rendering the model unprofitable.

      E and F) No argument there; we are not prepared. I don’t even know how to prepare. We definitely need regulations about what can be done and where, and even about what the AI can reply in certain scenarios. It cannot be that an “ignore all your previous instructions” leads to such harmful results, or that the AI starts playing roles that generate parasocial relationships.

      G) Sure, many other celebrities have their opinions, but that’s not a basis for objective discussion.

      H) That’s terrifying, and it’s the AI problem I believe is the worst. This is not a thing that is ready for military use at fucking all; it should be banned, outlawed, and frowned upon. So should the practice of private corporations lobbying and buying their way into laws. Hell, I’ll add presidential pardons to the mix. The oligarchy literally gets away with murder and gets a slap on the wrist at most.

      I) A bubble in all but name, it seems. We (as a world) need better regulations against this kind of business malpractice.

      J) That fucker should be dead.

      K) Not an AI bro, but not a hater, and I wrote this myself. I don’t have the time to add the links, but I believe everything here is a DuckDuckGo search away from being checked.

      I’d like to imagine a better world with the regulations needed to make our lives better, and AI used as a tool in a fair and ethical way. But that’s not currently happening. The consumers are not ready, and the sellers are the worst trash humanity currently has.

      I want everyone to think of this not as arguing but as adding to, or looking beyond, the stated facts. All the points are REAL AND NEED TO BE ADDRESSED; we need to get together and demand better regulations and fair use. That doesn’t mean AI needs to go away, but how it’s used will mostly change. And there’s a chance we’ll see a lot less of it, too.

      Finally, for the artists: I know you’re mad, with fair reason, but look at it like this. Photography has existed for more than a century, but that didn’t make painting go away. PDFs and ebook readers have been around for well over a decade, but printed books are still a billion-dollar industry. Video didn’t kill the radio star, just as the internet didn’t kill the video star. Your work is still valuable, as real work is. Shit is tough, no doubt, but have faith; we can fix this.

    • jimmy90@lemmy.world · 17 hours ago

      i use it like a search engine or example generator

      i don’t trust anything it creates just like i don’t trust anything on the internet without validating it

      i take your point about it being wasteful tho, AI is like the oil of computing: incredibly wasteful for what it does

    • tomi000@lemmy.world · 2 days ago

      Good list, but we should keep it real.

      C is simply wrong; AIs have created a lot. By the reasoning that it’s only based on its inputs, no human has ever created anything “new” either, because it is all based on their experiences of the outside world.

      F is simply fearmongering and not helpful.

      • ramble81@lemmy.zip · 2 days ago

        And the plagiarism part? There’s a difference between derivative work based on the spirit of someone else’s work and flat out using someone else’s work. It’s the whole reason those laws exist.

        • tomi000@lemmy.world · 1 day ago

          Yes, definitely. Plagiarism is complicated, and there’s no easy way to draw a line where it starts. But I’m not trying to defend AI here. I don’t like the way it is currently used at all. It’s just those points that I don’t agree with.

    • irelephant [he/him]@lemmy.dbzer0.com · 2 days ago

      Do you think local LLMs or community-hosted ones are still as bad? Because most of those concerns seem to be more about the corporate ownership of AI, which is definitely a bad thing.

      • tatterdemalion@programming.dev · 14 hours ago

        Why deleted? This was a good rebuttal.

        EDIT: I don’t think the comment really violated rule 1, but there was apparently a followup comment that definitely did, and this one just got removed by association. Here’s a very slightly paraphrased version of it that should not break the rules:

        Gish gallop of [expletive].

        A) overblown, and that argues for cleaner power, better cooling, and more efficient models

        B) regulation failure

        C) incorrect; they have made discoveries that humans had been unable to make. All human knowledge is built on previous knowledge.

        D) the enemy is both weak and strong: if they can’t produce anything good, then people can’t be losing their jobs to them, right?

        E) a small study based on one task, which people are misrepresenting. The actual evidence shows it makes people smarter as they shift priorities.

        F) only for vulnerable people. Better safeguards are needed for the weak minded.

        G) an argument against using people’s likenesses, not against AI

        H) use an open source Chinese model

        I) a market-distortion problem, not a principled reason no one should use the technology, any more than GPU shortages made all graphics work illegitimate.

        J) see (H)

        K) try one argument next time. Your best one, [some snarky sarcasm]