To be honest, I think we’re losing credibility. I don’t know what else to put in the description.

  • Asafum@lemmy.world · 2 months ago (edited)

    It’s making hobbyist computing expensive, it’s potentially eliminating some of the few actually enjoyable jobs (art, creative works), it’s making websites and applications less secure with vibe coding, and it’s allowing for even more convincing propaganda/bad faith actors to manipulate entire populations…

    But hey, at least Elon Musk gets to make naked pictures of kids and still be a billionaire. So there’s that.

  • Blaster M@lemmy.world · 2 months ago

    AI is a tool: it can be used for good, and it can be used for bad. Right now, the business world is trying to find ways to make it work for the business world - I expect 95 percent of these efforts to die off.

    My preference and interest is in local models - smaller, more specialized models that can be run on normal computers. They can do a lot of what the big ones do, without being a cloud service that harvests data.

  • da Tweaker@feddit.org · 2 months ago

    I liked AI at the start, and as a concept. But now I hate it due to its implementation and usage. So AI shouldn't be anywhere.

  • NegentropicBoy@lemmy.world · 2 months ago

    We are still figuring out what the current crop of LLMs are useful for, and we have many more innovations to look forward to.

  • pulsewidth@lemmy.world · 2 months ago (edited)

    My thoughts are that the USA is in a far worse position now to shoulder and recover from the coming bubble pop, crash, and financial crisis that the mass implementation of AI is about to cause than they were in 2008-2009 when the last crash hit.

    Could be wrong though. Maybe all the datacenters will get built on time, and be powered by a sudden breakthrough in nuclear fission, and maybe ~44% of people on the earth will sign up to paid plans with OpenAI so that they can become profitable.

  • FaceDeer@fedia.io · 2 months ago

    As much as people on the Fediverse or Reddit or whatever other social media bubble we might be in like to insist “nobody wants this” or that AI is useless, it actually is useful and a lot of people do want it. I’m already starting to see the hard-line AI hate softening, more people are going “well maybe this application of AI is okay.” This will increase as AI becomes more useful and ubiquitous.

    There’s likely a lot of AI companies and products starting up right now that aren’t going to make it. That’s normal when there’s a brand new technology, nobody knows what the “winning” applications are going to be yet so they’re throwing investment at everything to see what sticks. Some stuff will indeed stick, AI isn’t going to go away. Like how the Internet stuck around after the Dot Com bust cleared out the chaff. But I’d be rather careful about what I invest in myself.

    I’m not a fan of big centralized services and subscriptions, which unfortunately a lot of the American AI companies are driving for. But fortunately an unlikely champion of AI freedom has arisen in the form of… China? Of all places. They’ve been putting out a lot of really great open-weight models, focusing hard on getting them to train and run well on more modest hardware, and releasing the research behind it all as well. Partly that’s because they’re a lot more compute-starved than Western companies and have no choice but to do it that way, but partly just to stick their thumb in those companies’ eyes and prevent them from establishing dominance. I know it’s self-interest, of course. Everything is self-interest. But I’ll take it because it’s good for my interests too.

    As for how far the technology improves? Hard to say. But I’ve been paying attention to the cutting edge models coming out, and general adoption is still way behind what those things are capable of. So even if models abruptly stopped improving tomorrow there’s still years of new developments that’ll roll out just from making full use of what we’ve got now. Interesting times ahead.

  • UnspecificGravity@piefed.social · 2 months ago (edited)

    We’ve passed peak innovation in American tech, so we have to pretend that this is a product we want, lest we realize that none of our shit has gotten any better in the last fifteen years.

    • Perspectivist@feddit.uk · 2 months ago

      we have to pretend that this is a product we want

      ChatGPT alone has 800 million weekly users.

            • Perspectivist@feddit.uk · 2 months ago

              First you moved the goalposts by pivoting from “there’s no want for LLMs” to “okay but how many are paying.” You quietly shifted the entire criteria of “want” from voluntary demand to monetization the second evidence of massive adoption showed up.

              When I pointed out that VLC has hundreds of millions of users who also don’t pay, you tossed in the irrelevant “it’s open source by one person” line - which is a complete non sequitur. Development model or monetization status has zero logical bearing on whether 800 million weekly ChatGPT users demonstrate real desire for LLMs.

              This is classic bad-faith argumentation: throw in red herrings, change the standard whenever your position weakens, and misrepresent what was actually said to avoid engaging with the actual evidence.

  • ZoteTheMighty@lemmy.zip · 2 months ago

    A tool that doesn’t work, that fails to solve a problem we don’t have, while consuming all of our resources. What’s not to love?

  • archonet@lemy.lol · 2 months ago (edited)

    LLMs: flawed tools with potential but unfortunately vastly overhyped confidence in their abilities.

    Audiovisual AI – deepfakes, AI-generated art and music, AI facial and whole-body recognition, etc.: no. Just no. Nothing good has come, will come, or can come of this, I’m quite certain.

  • Ashtear@piefed.social · 2 months ago

    Mass implementation is a mistake, and I suspect implementation in consumer goods is where the bubble’s bursting will be the most devastating. Recent news tells us Dell has figured that out already, and I don’t think it will be long before society decides it can’t tolerate things like AI companions for young children.

    Ultimately, I don’t think widespread implementation with any sort of value will be possible unless someone figures out how to make effective prompt creation so easy anyone can do it. Everyone seems to think AI is just a box you press a button on and it’ll spit something out, but getting valuable output isn’t like that. Good prompt engineering and tool selection is hard, and it’ll have to be a trained skill for people working with whatever generative AI systems do stick around.

    The really unfortunate thing is LLMs are the perfect snake oil for sociopathic executives. They can provide something approximating meaningful human interaction to these lonely, workaholic MBAs, and from there, it wasn’t a hard sell to make them believe they could replace their pesky labor force, too. When you’re that far outside the real world, sycophantic illusions are seductive.

  • myfunnyaccountname@lemmy.zip · 2 months ago

    It’s a tool. It has uses. But do I need it in every single thing? No. Especially when, 90% of the time, the AI features are half-baked and crammed down your throat, like Windows and Copilot.

  • kbal@fedia.io · 2 months ago

    The technology they’ve come up with sort of works, some of the time, and can make for an impressive demo if you ignore its failings. If you suspend all disbelief and assume that because computers have learned this one new trick they’ll soon be smart enough to magically transform themselves into hyperintelligent AGI monsters straight out of science fiction, if you learn to really believe it, you can convince a lot of people that you might be right. Nobody can prove that it won’t happen, therefore it’s inevitable. Therefore it is existentially important for the future of humanity and it only makes sense to bet the entire economy on it right away without hesitation.