• Vanth@reddthat.com

    Is environmental impact at the top of anyone's list of reasons to dislike ChatGPT? It's not on mine, nor on that of anyone I've talked to.

    The two most common reasons I hear are 1) no trust in the companies hosting the tools to protect consumers, and 2) rampant theft of IP to train LLMs.

    The author moves away from a strict environmental focus despite claiming otherwise in their intro:

    This post is not about the broader climate impacts of AI beyond chatbots, or about whether AI is bad for other reasons

    […]

    Other Objections, This is all a gimmick anyway. Why not just use Google? ChatGPT doesn’t give better information

    … yet doesn’t address the most common criticisms.

    Worse, the author accuses anyone who pauses to think of the negatives of ChatGPT of being absurdly illogical.

    Being around a lot of adults freaking out over 3 Wh feels like I’m in a dream reality. It has the logic of a bad dream. Everyone is suddenly fixating on this absurd concept or rule that you can’t get a grasp of, and scolding you for not seeing the same thing. Posting long blog posts is my attempt to get out of the weird dream reality this discourse has created.

    IDK what logical fallacy this is, but claiming people are “freaking out over 3 Wh” is very disingenuous.

    Rating as basic content: 2/10, poor and disingenuous argument

    Rating as example of AI writing: 5/10, I’ve certainly seen worse AI slop

    • anus@lemmy.worldOP

      Thank you for your considered and articulate comment

      What do you think about the significant difference in attitude between comments here and in (quite serious) programming communities like https://lobste.rs/s/bxixuu/cheat_sheet_for_why_using_chatgpt_is_not

      Are we in different echo chambers? Is ChatGPT a uniquely powerful tool for programmers? Is social media a fundamentally Luddite mechanism?

      • Vanth@reddthat.com

        I’m curious if you can articulate the difference between being critical of how a particular technology is owned and managed versus being a Luddite?

        • anus@lemmy.worldOP

          I think I’m on board with arguing against how LLMs are being owned and managed, so I don’t really have much to say

      • Rooki@lemmy.world

        I would say GitHub Copilot (which uses a GPT model) uses more Wh than ChatGPT, because it gets hit with more queries on average: the “AI” autocomplete triggers almost every time you stop typing, or on seemingly random occasions.
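
        As a toy illustration of why frequency can outweigh per-query cost (every number below is a made-up assumption, including the per-completion figure; only the 3 Wh chat figure comes from the article):

        ```python
        # Toy comparison of daily energy from chat use vs. autocomplete use.
        # All volumes and the per-completion cost are hypothetical assumptions;
        # only the 3 Wh/chat-request figure is taken from the article.
        chat_queries_per_day = 20          # assumed casual ChatGPT use
        copilot_triggers_per_day = 500     # assumed: completion fires on most typing pauses
        wh_per_chat_query = 3.0            # article's per-request estimate
        wh_per_completion = 0.3            # assumed: short completions are cheaper per call

        print(f"chat:    {chat_queries_per_day * wh_per_chat_query:.0f} Wh/day")
        print(f"copilot: {copilot_triggers_per_day * wh_per_completion:.0f} Wh/day")
        ```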

  • jonathan@lemmy.zip

    ChatGPT energy costs are highly variable depending on context length and model used. How have you factored that in?
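
    For a sense of how much those two variables matter, here is a crude back-of-envelope sketch. The throughput, utilization, power, and PUE numbers are all assumptions, and decoding is often memory-bound rather than compute-bound, so treat the outputs as order-of-magnitude at best:

    ```python
    # Crude per-request energy estimate: ~2*N FLOPs per generated token,
    # divided by an assumed effective GPU throughput, times power draw and PUE.
    # Every constant here is an assumption for illustration, not a measurement.
    def energy_wh(params_b, output_tokens,
                  gpu_tflops=300.0,     # assumed effective throughput (TFLOP/s)
                  utilization=0.3,      # assumed fraction of peak actually achieved
                  gpu_power_w=700.0,    # assumed accelerator board power
                  pue=1.2):             # assumed datacenter overhead
        flops = 2 * params_b * 1e9 * output_tokens
        seconds = flops / (gpu_tflops * 1e12 * utilization)
        return seconds * gpu_power_w * pue / 3600

    for params_b, tokens in [(8, 300), (70, 300), (70, 2000)]:
        print(f"{params_b}B model, {tokens} tokens: ~{energy_wh(params_b, tokens):.2f} Wh")
    ```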

  • superkret@feddit.org

    tl;dr: “Yes it is, but not as much as other things, so stop worrying.”

    What a bullshit take.

    • anus@lemmy.worldOP

      What makes this a bullshit take? Focusing attention on actual problems is a great way to make progress

  • Takapapatapaka@lemmy.world

    I was very sceptical at first, but this article kinda convinced me. I think it still has some bad biases: it often considers only one ChatGPT request in its comparisons, when in reality you quickly make dozens of them; it says “how weird to try and save tiny amounts of energy” when we already do exactly that with lights when leaving a room and water when brushing our teeth; and it focuses on energy (to train, cool, and generate electricity) while ignoring the logistics and hardware required. But overall, two arguments got me:

    • one ChatGPT request seems to consume around 3 Wh, which is relatively low
    • even with billions of requests daily, chatbots seem to represent less than 5% of AI power consumption; the remainder is the real problem, and it lies in the hands of corporations (see the rough arithmetic sketch below)

    Still, it probably can't hurt to boycott that stuff, but it'd be more useful to use less social media, especially the platforms built around videos and pictures, and to watch videos in 144p.
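
    The rough arithmetic behind the second bullet, assuming one billion requests a day (the 3 Wh and <5% figures are the article's; the request volume is my assumption):

    ```python
    # Rough arithmetic: per-request energy * assumed daily volume, annualized,
    # then the total AI consumption implied if chatbots are under 5% of it.
    WH_PER_REQUEST = 3.0        # article's per-request figure
    REQUESTS_PER_DAY = 1e9      # assumed: "billions of requests daily"

    chatbot_twh_year = WH_PER_REQUEST * REQUESTS_PER_DAY * 365 / 1e12
    print(f"chatbots: ~{chatbot_twh_year:.1f} TWh/year")
    print(f"implied total AI use if that is <5%: >~{chatbot_twh_year / 0.05:.0f} TWh/year")
    ```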

      • NeilBrü@lemmy.world

        Oof, ok, my apologies.

        I am, admittedly, “GPU rich”; I have ~48GB of VRAM at my disposal on my main workstation, and 24GB on my gaming rig. Thus, I am using Q8 and Q6_L quantized .gguf files.

        Naturally, my experience with the “fidelity” of my LLM models re: hallucinations would be better.
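
        For anyone curious, this is roughly what running one of those quantized .gguf files looks like; a minimal sketch assuming the llama-cpp-python bindings, with a hypothetical model path and settings:

        ```python
        # Minimal sketch: load a quantized .gguf model with llama-cpp-python
        # and generate a completion. Path, quant level, and settings are hypothetical.
        from llama_cpp import Llama

        llm = Llama(
            model_path="models/llama-3-70b-instruct.Q6_K.gguf",  # hypothetical file
            n_gpu_layers=-1,   # offload every layer to the GPU (needs enough VRAM)
            n_ctx=8192,        # context window; larger values cost more VRAM
        )

        out = llm("Briefly compare Q8 and Q6 quantization.", max_tokens=200)
        print(out["choices"][0]["text"])
        ```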

    • anus@lemmy.worldOP

      I actually think that (presently) self-hosted LLMs are much worse for hallucination