• darthelmet@lemmy.world · 6 months ago

    No. But not because AI isn’t gonna get better; it’s because hype is an ever moving goal post. Nobody gets excited about what’s already possible. Hype lives on vague promises of some amazing future that is right around the corner, we promise. Then, by the time it becomes apparent that a lot of the claims were nonsense and the actual developments were steadier and less dramatic, they’ve already moved on to new wild claims.

    • kent_eh@lemmy.ca · 6 months ago

      because hype is an ever moving goal post.

      That’s it exactly.

      Nothing ever lives up to its hype because the hype is setting unachievable expectations.

    • AA5B@lemmy.world · 6 months ago

      So true, and especially true of AI. Previous rounds of AI hype tended to turn into boring things that just worked, and the hype moved on. Even automated driving, where AI really hasn’t delivered yet, has turned into boring, everyday, ho-hum features common to cars, and the hype has moved on to generative AI.

  • sugar_in_your_tea@sh.itjust.works · 6 months ago

    Warning, here’s the cynic in me coming out.

    The NY Times has a vested interest in discrediting AI, specifically LLMs (which is what they seem to be referring to), since journalism is a huge target here: it’s pretty easy to get LLMs to generate believable articles. So here’s how I break down this article:

    1. Lean on Betteridge’s law of headlines to cast doubt on the long-term prospects of LLMs
    2. Further the doubt by pointing out people don’t trust them
    3. Present them as a credible threat later in the article
    4. Juxtapose LLMs and cryptocurrencies while technically dismissing such a link (then why bring it up?)
    5. Leave the conclusion up to the reader

    I learned nothing new about current or long term LLM viability other than a vague “they took our jerbs!” emotional jab.

    AI is here to stay, and it’ll continue getting better. We’ll adapt to how it changes things, hopefully as fast or faster than it eliminates jobs.

    Or maybe my tinfoil hat is on too tight.

    • sudo42@lemmy.world · 6 months ago

      The NY Times has a vested interest in discrediting AI, specifically LLMs (which is what they seem to be referring to), since journalism is a huge target here: it’s pretty easy to get LLMs to generate believable articles.

      The writers and editors may be against AI, but I’m betting the owners of the NYT would LOVE to have an AI that would simply re-phrase “news” (ahem) “borrowed” from other sources. The second upper management thinks this is possible, the humans will be out on their collective ears.

      • abhibeckert@lemmy.world · 6 months ago

        I’m betting the owners of the NYT would LOVE to have an AI that would simply re-phrase “news” (ahem) “borrowed” from other sources

        No way. NYT depends on their ability to produce high quality exclusive content that you can’t access anywhere else.

        In your hypothetical future, NYT’s content would be mediocre and no better than a million other news services. There’s no profit in that future.

    • QuadratureSurfer@lemmy.world · 6 months ago

      This would actually explain a lot of the negative AI sentiment I’ve seen that’s suddenly going around.

      Some YouTubers have hopped on the bandwagon as well. There was a video posted the other day where a guy attempted to discredit AI companies overall by saying their technology is faked. A lot of users were agreeing with him.

      He then proceeded to point out stories about how Copilot/ChatGPT output information that was very similar to a particular travel website. He also pointed out how Amazon Fresh stores required a large number of outsourced workers to verify shopping cart totals (implying that there was no AI model at all and not understanding that you need workers like this to actually retrain/fine-tune a model).

  • Melkath@kbin.social · 6 months ago

    It is already showing really great potential.

    Then the news drops that all of the progress we made on global warming has been undone by the energy usage caused by AI.

    So sure, AI will live up to the hype, and we will still die a slow and agonizing death of heat and suffocation.

    But at least we will have AI friend chat bots to comfort us through the end.

    • slurpyslop@kbin.social · 6 months ago

      tbh if AI ever reaches its full potential, it will probably be responsible for massively reversing global warming: when the entire population is unemployed, they won’t be able to afford to travel, heat their homes, buy products, or eat, all four of which are key contributors to emissions

      • Melkath@kbin.social · 6 months ago

        I’d say the masses dying of starvation and exposure would reduce global warming, but the millions of cars will be replaced by hundreds of private jets that net greater emissions.

        The poor factory workers will be more destitute, and the rich will always find a way to fill that void. Fuck, they will force their pilot to fly the empty jet around the world just to make sure that conditions on earth don’t improve and they eventually get to utilize their vault 3 miles underground before they die.

    • General_Effort@lemmy.world · 6 months ago

      That doesn’t even make sense. I have the mild suspicion that the fossil fuel industry sponsors nonsense like that, as a distraction from sane measures.

      What we need to do to stop global warming is very simple: Stop using fossil fuels. We must not add CO2 to the atmosphere.

      AI has nothing to do with that. It’s just one more use for electricity. If we wanted to stop global warming, we would get the electricity by saving elsewhere, or generating more carbon-neutral electricity, with solar, wind or what not. We simply chose not to do that.

      • Melkath@kbin.social · 6 months ago

        CO2 doesn’t only come from fossil fuels. It comes from combustion in general.

        We could go nuclear, but look at how Russia has ruined its entire country trying to do that (if you’re not aware of how severe the radiation problem in all of Russia, not just Chernobyl, has become, there are tons of YouTubers who do documentary-style content; Plainly Difficult is one of my favorites).

        Solar, wind, and hydro can do it, but the amount of CO2 produced by manufacturing the generators is still massive. It’s just producing the CO2 upstream in the process instead of during the actual power generation. It would take so many solar panels and windmills to replace burning coal that producing them would still release an amount of greenhouse gases that rivals just burning the coal.

        I don’t disagree that we can try to make moves to mitigate the damage, but giant red flags went up about crypto mining. The power draw from AI is far surpassing that, and AI has hardly even started to spin up yet.

        I hope for the day we figure out how to produce unlimited energy without destroying the atmosphere in the process, but it’s Newton’s Third Law: every action has an equal and opposite reaction. Each “solution” comes with its drawbacks, and our thirst for electricity only ever grows.

        The answer would be to make AI draw less power, not to create more power in different ways.

        There is no way we are going to get CEOs to scale back their AI power draw when it gives them the ability to scan everyone’s face and spy on them in a comprehensive, existential way. They are already using it on Anti-Zionist protesters. They are never going to give that kind of power up, but that kind of power requires an insane amount of… power.

        • abhibeckert@lemmy.world · 6 months ago

          Solar, wind, hydro can do it, but the amount of CO2 produced by manufacturing the generators is still massive

          That’s FUD.

          Sure, the concrete in a large hydro dam requires a staggering amount of energy to produce (because the chemical reaction that produces cement needs insane amounts of heat), but there’s no reason any CO2 needs to be emitted. You can absolutely use zero-emission power to reach the high temperatures needed to produce cement.

          And not all hydro needs a massive concrete wall. There’s a hydro station near my city that doesn’t have a dam at all: just a series of pipes that run from the top of a mountain to the bottom. A permanent medium-sized river that never stops flowing comes down off the mountain, with an elevation change of several hundred metres. It provides more power than the entire city’s consumption, and does so while diverting only a tiny percentage of the river’s water. As the city grows, the power plant can easily be upgraded to divert more of the water through pipes instead of letting it flow uselessly down towards the sea.

          Covid and Russia’s war created massive fluctuations recently, but if you look through that noise, global CO2 emissions are pretty much flat and have been for a few years now. They are almost certainly going to trend downwards going forward (a lot of countries are already seeing downward movement).

          The simple reality is that fossil fuels are now too expensive to be competitive. Why would anyone power an AI (or mine crypto) with coal power that costs $4,074/kW when you could use solar at $1,300/kW during the day (at night it’s more like $1,700 to $2,000/kW with the best storage options, such as batteries or pumped storage), or wind at around $1,700/kW?

          Nuclear is $8,000/kW unless you live in Russia, where safety is largely ignored.

          Hydro can be cheap if you happen to be near an ideal river, but for most locations it’s not competitive with solar/wind. So hydro is safe as a long-term power generation method, but it’s never going to be the dominant form of power unless (like my city) you happen to have ideal geology.
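
          Taking the per-kW figures quoted above at face value (they’re illustrative capital-cost numbers from this comment, not authoritative data), a quick sketch shows how stark the gap is:

```python
# Illustrative capital costs in $/kW, as quoted in the comment above.
# Rough figures for comparison only, not authoritative data.
capital_cost_per_kw = {
    "coal": 4074,
    "solar (daytime)": 1300,
    "solar + storage": 2000,  # upper end of the quoted $1,700-$2,000 range
    "wind": 1700,
    "nuclear": 8000,
}

# Total capital cost of a hypothetical 100 MW (100,000 kW) build-out,
# cheapest source first.
plant_kw = 100_000
for source, cost in sorted(capital_cost_per_kw.items(), key=lambda kv: kv[1]):
    print(f"{source:>16}: ${cost * plant_kw / 1e6:,.0f}M")
```

At these numbers, a 100 MW coal build costs roughly three times a daytime-solar build of the same capacity, which is the commenter’s point about competitiveness.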

  • Rottcodd@kbin.social · 6 months ago

    Axiomatically, no, since it isn’t even AI in any meaningful sense of the term, so it fails to live up to its hype right out of the gate.

  • veee@lemmy.ca · 6 months ago

    The benefits to learning math and science look pretty promising.

    • best_username_ever@sh.itjust.works · 6 months ago

      Not for the junior programmers around me. They use ChatGPT and then cannot tell me what “they” wrote or why it’s wrong. They will learn nothing, and I suspect it’s the same for everything that requires some thinking and fixing your own mistakes.

      As a senior I don’t care, but I pity them.

      • veee@lemmy.ca · 6 months ago

        Right, that’s just plagiarism.

        I’m talking about the recent demos using AI to teach you subject matter via conversation. Seems like info retention could be higher.

  • Pxtl@lemmy.ca · 6 months ago

    Absolutely.

    Bing Chat Assistant is better than Google, Bing search, or DDG today. If I search for “how do I do X in software Y” on a normal search, I get zillions of dead-link-filled MS pages, some interesting tangentially-related stackoverflow posts, and a bunch of old blogspam.

    If I ask the robot, I often get “no, there’s no supported way to do that officially,” which is the clear, clean answer I can’t find anywhere else. Sometimes it misunderstands the question and gives me a tangentially related result, which is bad, but it’s the same thing I get from Google via StackOverflow. Except Bing is much more responsive when I say “no, I didn’t mean that way, I meant this,” in which case I often get either the right answer or the “no” answer, which is still good and accurate! The problem is that as you iterate, the conversation accumulates cruft and becomes more erratic and hallucinatory.

    But right now, with the level of SEO that has ruined all major search engines (ironically partially caused by AI), Bing Chat is the best search on the market now imho. <homer>The cause of and solution to all of life’s problems </homer>

    So yeah, in terms of “things where AI has lived up to its potential”? It is winning the search war today. Everything else is something on the horizon at various distances (art, music, text generation, true general AI), but better search for information is here right now.