I’ve tried several types of artificial intelligence, including Gemini, Microsoft Copilot, and ChatGPT. A lot of the time I ask them questions and they get everything wrong. If artificial intelligence doesn’t work, why are they trying to make us all use it?

  • SpaceNoodle@lemmy.world · +73/−3 · 15 days ago

    Investors are dumb. It’s a hot new tech that looks convincing (since LLMs are designed specifically to appear correct, not be correct), so anything with that buzzword gets a ton of money thrown at it. The same phenomenon has occurred with blockchain, big data, even the World Wide Web. After each bubble bursts, some residue remains that actually might have some value.

    • Kintarian@lemmy.world (OP) · +11/−1 · 15 days ago

      I can see that. That guy over there has the new shiny toy. I want a new shiny toy. Give me a new shiny toy.

  • gedaliyah@lemmy.world · +35/−1 · 15 days ago

    Generative AI has allowed us to do some things that we could not do before. A lot of people very foolishly took that to mean it would let us do everything we couldn’t do before.

  • some_guy@lemmy.sdf.org · +16/−2 · 15 days ago

    Rich assholes have spent a ton of money on it and they need to manufacture reasons why that wasn’t a waste.

  • givesomefucks@lemmy.world · +17/−6 · edited · 15 days ago

    A dumb person thinks AI is really smart, because they just listen to anyone who answers confidently.

    And no matter what, AI is going to give its answer as if it’s 100% definitely the truth.

    That’s why there’s such a large crossover between AI and crypto; the same people fall for everything.

    There’s new supporting evidence for Penrose’s theory that natural intelligence involves just an absolute shit ton of quantum interactions, because we just found out how the body can create an environment where quantum superposition can not only be achieved, but achieved incredibly simply.

    AI got a boost because we didn’t really (and still don’t) understand consciousness. Tech bros convinced investors that neurons were what mattered, and made predictions for when that number of neurons could be simulated.

    But if it includes billions of molecules in quantum superposition, we’re not getting there in our lifetimes. There’s a lot of money sunk into it already, though, so there’s a lot of money to lose if people suddenly get realistic about what it takes to make a real artificial intelligence.

    • OpenStars@discuss.online · +5/−2 · 15 days ago

      That’s why there’s such a large crossover with AI and crypto, the same people fall for everything.

      There’s a large overlap, but some people that did not fall for crypto may fall for AI.

      Always never not be hustling, I suppose.

        • givesomefucks@lemmy.world · +3/−2 · 15 days ago

        The finding that microtubules can create an environment that sustains quantum superposition just came out like a month ago.

        In all honesty, the tech bros probably don’t even know yet, or don’t understand that it means human-level AI speculation has essentially been ruled out as happening anytime remotely soon.

        But I’m assuming when they do, they’ll just ignore it and double down to maintain share prices.

        It’s also possible it all crashes and billions of dollars disappear.

          • Blue_Morpho@lemmy.world · +4 · 15 days ago

          Microtubules have been pushed for decades without any proof. The latest paper wasn’t evidence, just unsupported speculation.

          But more importantly, the physics of the computation that creates intelligence has absolutely nothing to do with understanding intelligence. Even if quantum effects are relevant (which is extremely unlikely given the warm and moving environment inside the brain), it doesn’t answer anything about how humans are intelligent.

          Penrose used quantum mechanics as a “God of the gaps” explanation. That worked 40 years ago, but today we have working quantum computers and still no human-level artificial intelligence.

  • xia@lemmy.sdf.org · +10 · 15 days ago

    The natural general hype is not new… I even see it in 1970s sci-fi. It’s like once something pierced the long-thought-impossible Turing test, decades of hype pressure suddenly and freely flowed.

    There is also an unnatural hype: the idea that with one breakthrough will come another, and that the next one might yield a technocratic singularity to the first mover — money, market dominance, and control.

    Which brings the tertiary effect (closer to your question)… companies are so quickly and blindly eating billions of dollars in first-mover costs that the corporate copium wants to believe there will be a return (or at least some cost defrayal)… so you get a bunch of shitty AI products, and pressure toward them.

      • xia@lemmy.sdf.org · +3 · 15 days ago

        I’m not talking about one-offs and the assessment noise floor, more like: “ChatGPT broke the Turing test” (as is claimed). It used to be something we tried to attain, and now we don’t even bother trying to make GPT seem human… we actually train them to say otherwise lest people forget. We figuratively pole-vaulted over the Turing test and are now on the other side of it, as if it were a point on a timeline instead of an academic procedure.

  • Feathercrown@lemmy.world · +9 · 15 days ago

    Disclaimer: I’m going to ignore all moral questions here

    Because it represents a potentially large leap in the types of problems we can solve with computers. Previously the only comparable tools we had to solve problems were algorithms, which are fast, well-defined, and repeatable, but cannot deal with arbitrary or fuzzy inputs in a meaningful way. AI excels at dealing with fuzzy inputs (including natural language, which was a huge barrier previously), at the expense of speed and reliability. It’s basically an entire missing half of our toolkit.

    Be careful not to conflate AI in general with LLMs. AI is usually implemented as machine learning (ML), which is a method of fitting an output to training data. LLMs are a specific instance of this that are trained on language (hence, large language models). I suspect that if AI becomes more widely adopted, most users will be interacting with LLMs like you are now, but most of the business benefit would come from classifiers that have a more restricted input/output space. As an example, you could use ML to train an AI that detects potentially suspicious bank transactions. The more data you have to sort through, the better AI can learn from it*, so I suspect the companies that have been collecting terabytes of data will start using AI to try to analyze it. I’m curious whether that will be effective.

    *technically it depends a lot on the training parameters
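
To make the “fitting an output to training data” idea concrete, here is a toy nearest-centroid classifier for the hypothetical bank-transaction example above. The feature names and numbers are invented for illustration; a real fraud model would use far richer features and a proper ML library:

```python
# Toy nearest-centroid classifier: "fit" by averaging labeled examples,
# then label new inputs by which class average they fall closest to.

def fit_centroids(examples):
    """Average the feature vectors of each class ("ok" vs "suspicious")."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
    return {label: [s / counts[label] for s in vec] for label, vec in sums.items()}

def classify(centroids, features):
    """Label a transaction by its nearest class centroid (squared distance)."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

# Hypothetical features: (amount in $1000s, transactions in the last hour)
training = [
    ((0.05, 1), "ok"), ((0.12, 2), "ok"), ((0.08, 1), "ok"),
    ((9.5, 8), "suspicious"), ((7.2, 12), "suspicious"),
]
centroids = fit_centroids(training)
print(classify(centroids, (8.0, 10)))  # a large, rapid burst of spending -> "suspicious"
```

The shape is the same as in real ML — fit a model to labeled training data, then score new inputs — just with averaging standing in for the elaborate optimization that actual models use.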

    • Kintarian@lemmy.world (OP) · +2 · 15 days ago

      I suppose it depends on the data you’re using it for. I can see a computer looking through stacks of data in no time.

  • Buglefingers@lemmy.world · +8 · 15 days ago

    IIRC, when ChatGPT was first announced, the hype was because it was the first really usable interface a layman could interact with using normal language and get an intelligible response from the software. Normally to talk with computers we use their language (programming), but this allowed plain-language speakers to interact with it and get it to do things with simple language, in a far more capable way than something like Siri.

    This then got overhyped and overpromised to people with dollars in their eyes at the thought of large savings from labor reduction and of capabilities far greater than it had. They were sold a product that has no real “product,” as it’s something most people would prefer to interact with on their own terms when needed, like any tool. That’s really hard to sell, and hard to make people believe they need. So they doubled down with the promise it would be so much better down the road. And, having already spent an ungodly amount on it, they have that sunk-cost fallacy and keep doubling down.

    This is my personal take on and understanding of what’s happening. There are probably more nuances, though, like staying ahead of the competition that also fell for the same promises.

  • Ænima@lemm.ee · +7 · 15 days ago

    It amazed people when it first launched, and capitalists took that to mean they could replace all their jobs with AI. Where we wanted AI to make shit jobs easier, they used it to replace whole swaths of talent across industries. Recent movies read like they were written almost entirely by AI. Like when Cartman was a robot and kept giving out terrible movie ideas.

  • Admiral Patrick@dubvee.org · +8/−2 · edited · 15 days ago

    Like was said: money.

    In addition, they need training data: both conversations and raw material. Shoving “AI” into everything whether you want it or not gives them real-world conversational data to train on. If you feed it any documents, etc., it’s also sucking those up as raw training data.

    Ultimately the best we can do is ignore it and refuse to use it or feed it garbage data so it chokes on its own excrement.

  • empireOfLove2@lemmy.dbzer0.com · +6 · 15 days ago

    They were pretty cool when they first blew up. Getting them to generate semi-useful information wasn’t hard, and anything hard and factual they would usually avoid answering or defer on.

    They’ve legitimately gotten worse over time. As user volume has gone up, necessitating faster, shallower responses, and as further training on internet content (increasingly their own output) has degraded them, the models have gradually begun to break. They’ve also been pushed harder than they were meant to be, to show “improvement” to investors demanding more accurate, human-like factual responses.

    At this point it’s a race to the bottom on a poorly understood technology. Every money-sucking corporation latched on to LLMs like a piglet finding a teat, thinking they would be the golden goose that finally eliminates those stupid, whiny, expensive workers who always ask for annoying, unprofitable things like “paid time off” and “healthcare.” In reality they’ve been sold a bill of goods by Sam Altman and the rest of the tech bros, who are currently raking in a few extra hundred billion dollars.

    • Kintarian@lemmy.world (OP) · +3/−2 · 15 days ago

      Now it’s degrading even faster as AI scrapes from AI in a technological circle jerk.

      • Feathercrown@lemmy.world · +2 · 15 days ago

        Yes, that’s what they said. I’m starting to think you came here with a particular agenda to push, and I don’t think that’s very polite.

        • Kintarian@lemmy.world (OP) · +2 · 15 days ago

          The person who said AI is neither artificial nor intelligent was Kate Crawford. Every source I try to find is paywalled.

              • Feathercrown@lemmy.world · +2 · 14 days ago

                I find that a lot of discourse around AI is… “off”. Sensationalized, or simplified, or emotionally charged, or illogical, or simply based on a misunderstanding of how it actually works. I wish I had a rule of thumb to give you about what you can and can’t trust, but honestly I don’t have a good one; the best thing you can do is learn about how the technology actually works, and what it can and can’t do.

                • Kintarian@lemmy.world (OP) · +2 · 14 days ago

                  For a while Google said they would revolutionize search with artificial intelligence. That hasn’t been my experience. Someone here mentioned working on the creative side instead. And that seems to be working out better for me.

        • Kintarian@lemmy.world (OP) · +1 · 15 days ago

          Look it up. Also, they were pushing AI for web searches, and I have not had good luck with that. However, I created a document with it yesterday and it came out really well. Someone said to try the creative side, and so far, so good.

          • Feathercrown@lemmy.world · +1 · 15 days ago

            Look it up

            I know what model collapse is, it’s a fairly well-documented problem that we’re starting to run into. You’re not wrong, it’s just that the person you replied to was agreeing about this.

            Someone said to try the creative side and so far, so good.

            Nice! I’m glad you were able to find something useful to use it for.

  • bionicjoey@lemmy.ca · +6/−1 · 15 days ago

    A lot of jobs are bullshit. Generative AI is good at generating bullshit. This led to a perception that AI could be used in place of humans. But unfortunately, curating that bullshit enough to produce any value for a company still requires a person, so the AI doesn’t add much value. The bullshit AI generates needs some kind of oversight.

  • Tylerdurdon@lemmy.world · +5 · 15 days ago

    • automation by companies so they can “streamline” their workforces.

    • innovation by “teaching” it enough to solve bigger problems (cancer, climate, etc).

    • creating a sentient species that is the next evolution of life and watching it systematically eradicate every last human to save the planet.

    • Kintarian@lemmy.world (OP) · +6/−1 · 15 days ago

      It’s easier for the marketing department. According to an article, it’s neither artificial nor intelligent.

        • Kintarian@lemmy.world (OP) · +3 · 15 days ago

          Artificial intelligence (AI) is not “artificial” in the sense of being fake or counterfeit; rather, it is a human-created form of intelligence. AI is a real and tangible technology that uses algorithms and data to simulate human-like cognitive processes.

            • Kintarian@lemmy.world (OP) · +1 · 14 days ago

              Well, using the definition that artificial means man-made, then no. Human intelligence wasn’t made by humans, therefore it isn’t artificial.

              • canadaduane@lemmy.ca · +2 · 1 day ago

                I wonder if some of our intelligence is artificial. Being able to drive directly to any destination, for example, with a simple cell-phone lookup. Reading lifetimes worth of experience in books that doesn’t naturally come at birth. Learning incredibly complex languages that are inherited not by genes, but by environment–and, depending on the language, being able to distinguish different colors.

                • Kintarian@lemmy.world (OP) · +1 · 1 day ago

                  From the day I was born, my environment shaped what I thought and felt. Entering the school system, I was indoctrinated into whatever society I was born into. All of the things that I think I know are shaped by someone else. I read a book and I regurgitate its contents to other people. I read a post online and I start pretending that it’s the truth when I don’t actually know. How often do humans actually have an original thought? Most of the time we’re just regurgitating things that we’ve experienced, read, or heard from external forces rather than coming up with thoughts on our own.

    • 5gruel@lemmy.world · +3/−1 · 14 days ago

      When will people finally stop parroting this sentence? It completely misses the point and answers nothing.

      • Kanda@reddthat.com · +1 · 13 days ago

        Where’s the intelligence in suggesting glue on pizza? Or is it just copying random stuff and guessing what comes next, like a huge phone keyboard app?
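
The “huge phone keyboard app” comparison is closer to the mark than it might sound: at their core, language models are trained to predict a likely next token given the preceding text. A toy bigram sketch of that idea (real LLMs use neural networks over vast corpora, not simple counts, and this tiny corpus is invented for illustration):

```python
# Toy bigram "next word" predictor -- the phone-keyboard version of
# next-token prediction. Counts which word follows which, then suggests
# the most frequent follower.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, how often every other word follows it."""
    words = text.split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, word):
    """Suggest the most frequently observed follower, keyboard-style."""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

model = train_bigrams("the cat sat on the mat the cat ate the fish")
print(predict_next(model, "the"))  # -> "cat" (follows "the" twice; "mat"/"fish" once)
```

Nothing in this predictor “knows” anything; it just continues text plausibly, which is one reason confident-sounding wrong answers can come out of far bigger versions of the same idea.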