• minorkeys@lemmy.world (↑156/↓6) · 1 month ago

    The public fundamentally misunderstands this tech because salesmen lied to them. An LLM is not AI. It just says the most likely thing based on what is most common in its training data for that scenario. It can’t do math or solve problems; it can only tell you what the most likely answer would be. It can’t actually execute functions. It’s like Family Feud, where the board shows what the most surveyed people said.
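    The Family Feud analogy can even be sketched in code. This is a toy illustration with made-up numbers, not how any real model works: given counts of how often each answer followed a prompt in some hypothetical training data, the greedy “most likely thing” is just the top survey answer.

    ```javascript
    // Hypothetical survey counts standing in for training-data statistics.
    const surveyed = {
      "things in a purse": { keys: 41, phone: 32, wallet: 18, lipstick: 9 },
    };

    function mostLikelyAnswer(prompt) {
      const counts = surveyed[prompt];
      // Greedy pick: the continuation seen most often, i.e. the top answer
      // on the Family Feud board. No reasoning, just frequency.
      return Object.entries(counts).sort((a, b) => b[1] - a[1])[0][0];
    }

    console.log(mostLikelyAnswer("things in a purse")); // "keys"
    ```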

    • Clent@lemmy.dbzer0.com (↑78) · 1 month ago

      Some of them will “do math,” but not with the LLM predictor: they have a math engine, and the predictor decides when to use it. What’s great is that when it outputs results, it’s not clear whether it engaged the math engine or just guessed.
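      That routing can be sketched like this (a toy with invented names; real systems use trained tool-calling, not a regex). The `usedTool` flag is exactly the information many chat UIs hide from the user:

      ```javascript
      // Exact arithmetic goes to a deterministic "math engine"; everything
      // else falls back to the model's guess.
      function mathEngine(expr) {
        const m = expr.match(/^(\d+)\s*([+\-*/])\s*(\d+)$/); // only "a op b"
        if (!m) return null;
        const [, a, op, b] = m;
        const x = Number(a), y = Number(b);
        return { "+": x + y, "-": x - y, "*": x * y, "/": x / y }[op];
      }

      function answer(prompt, modelGuess) {
        const exact = mathEngine(prompt);
        if (exact !== null) return { value: exact, usedTool: true };
        return { value: modelGuess(prompt), usedTool: false };
      }

      console.log(answer("17 * 23", () => "roughly 400")); // { value: 391, usedTool: true }
      ```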

      • hikaru755@lemmy.world (↑14) · 1 month ago

        when it outputs results, it’s not clear if it engaged the math engine or just guessed

        That depends on the harness though. In the plain model output it will be clear if a tool call happened, and it depends on the application UI around it whether that’s directly shown to the user, or if you only see the LLM’s final response based on it.

    • 1D10@lemmy.world (↑26) · 1 month ago

      I explain it as asking 100 people to Google something and taking the most common answer.

        • 1D10@lemmy.world (↑19) · 1 month ago

          Yep but instead of “name something a woman keeps in her purse” it’s “write my legal document” or “is it ok to lick a lamp socket”

          • felbane@lemmy.world (↑5) · 1 month ago

            Great question! The answer to all three of your queries is “yes.” Would you like me to search for the nearest lamp socket?

    • SorryQuick@lemmy.ca (↑4/↓4) · 1 month ago

      Is a human much different? We too require tons of training and we too are prone to stupid mistakes.

      • Scubus@sh.itjust.works (↑2) · 1 month ago

        Fundamentally, yes and no. The original commenter could’ve saved his breath; if people wanted to be educated on AI they have plenty of resources to do so, but instead they choose to remain ill informed. The difference is that humans are capable of critical thinking and conceptual connection. We are just as prone to mistakes as AI, we just have a much higher aptitude for mistakes lol. Hence the goal isn’t to make a perfect AI; the much more achievable goal is making AIs that beat us in specific fields, then beat us in all fields.

        • SorryQuick@lemmy.ca (↑0/↓2) · 1 month ago

          It’s missing features obviously (think neuroplasticity) but is that how AI differs from human intelligence, or simply a lack in the current generation?

          • Scubus@sh.itjust.works (↑2) · 1 month ago

            It seems to be a flaw on both the hardware and software side of things. Hardware-wise, we have yet to make chips that achieve the processing density of human brain matter. Also, heat generation becomes an issue as you try to scale smaller systems up. Software-wise, we know our current neural networks don’t scale up well, so we seem to be waiting on more foundational research for more efficient algorithms. My suspicion is that we’re not really going to get true general superintelligence until we start manufacturing chips that incorporate living neurons; it just really seems cheaper to use already existing computing systems than to design your own architecture.

  • Ganbat@lemmy.dbzer0.com (↑40/↓1) · 1 month ago

    Okay, so, in case the headline is confusing anyone else, it’s literal. Like, you know how there are those cringe-ass Alexa ads that are about how it does AI language processing and assistant shit? Yeah, ChatGPT can’t I guess.

    • pyre@lemmy.world (↑1) · 27 days ago

      almost like it’s useful to have purpose built tools rather than feeding the entire web to an autocomplete algorithm and letting it go nuts

  • MousePotatoDoesStuff@lemmy.world (↑34) · 1 month ago

    Even if it could, it would be an order of magnitude less convenient than the stopwatch we already have on our phones.

    “Hey ChatGPT, do the thing I could have done in 3-4 clicks on my clock app.”

    Not to mention the sheer wastefulness in terms of energy. A MINECRAFT REDSTONE MACHINE TIMER WOULD BE MORE EFFICIENT. (Not to mention that, unlike SOTA LLMs, it can run offline on a phone)

    • FuglyDuck@lemmy.world (↑14) · 1 month ago

      Minecraft is Turing-complete, so, like, you can do a whole lot more than just be a timer.

      • MousePotatoDoesStuff@lemmy.world (↑6) · 1 month ago

        Absolutely. I was thinking of getting back into Minecraft redstone, but I’d rather do it in a non-Microsoft alternative. Not to mention the at least a dozen other projects on my backlog.

        • FuglyDuck@lemmy.world (↑4) · 1 month ago

          Yeah, I’d be playing Minecraft too, especially with my nephew, but uhm, that whole microslop thing ruins it for me.

          We do enjoy Space Engineers, though. (Lots of mining, lots of building. Nephew loves it when Klang accepts our sacrifices.) It’s a bit more involved than Minecraft, though.

          (But niftishly, there are programmable blocks that will let you write C# code and… do things.) (Space Engineers 2 is in early access if you have the hardware.)

          • MousePotatoDoesStuff@lemmy.world (↑1) · 1 month ago

            It looks interesting, but I’m looking for something with more… world in it. Something to use my logic circuits with (count storage items, simple store, item mail system…)

            • supersquirrel@sopuli.xyz (↑3) · 1 month ago

              An open-source voxel game creation platform. Play one of our many games solo or together. Mod a game as you see fit, or make your own.

              https://www.luanti.org/en/

              VoxeLibre is a survival sandbox game for Luanti. Survive, gather, hunt, mine for ores, build, explore, and do much more. Inspired by Minecraft, pushing beyond.

              https://content.luanti.org/packages/wuzzy/mineclone2/

              Mesecons! They’re yellow, they’re conductive, and they’ll add a whole new dimension to Luanti’s gameplay.

              Mesecons implements a ton of items related to digital circuitry, such as wires, buttons, lights, and even programmable controllers. Among other things, there are also pistons, solar panels, pressure plates, and note blocks.

              Mesecons has a similar goal to Redstone in Minecraft, but works in its own way, with different rules and mechanics.

              https://content.luanti.org/packages/Jeija/mesecons/

    • Jhex@lemmy.world (↑6) · 1 month ago

      You are correct but I think you are missing the point.

      Remember, from the perspective of all AI companies (OpenAI probably more than most), AI is this monster tech that will surely replace all workers and even your grandma, as it can bake better cookies.

      This is yet another display of how lacking AI is at a simple, everyday task… but more importantly, it is a gigantic demonstration of how AI is completely blind to its own weaknesses, which is what makes it really, really dangerous when used as prescribed by OpenAI and the others.

      This situation is basically the same as when the brand new $700 iPhones (back when that was eye-wateringly expensive for a phone) could not run an alarm in the mornings, and Apple’s answer was something like “why are you using your Cadillac phone as a cheap alarm?”… it should fucking wake me up with a massage for that cost!

  • robocall@lemmy.world (↑33) · 1 month ago

    He’s going to ask the US Congress for a bailout with taxpayers’ money when this all fails, and Congress will most likely give it to him, because this one company is a huge part of the US economy.

    • frank@sopuli.xyz (↑15) · 1 month ago

      I don’t think so, and I’m on the Ed Zitron train of thought why not.

      The financial instruments got a bailout in ’08 because the economy itself would have stopped functioning. That’s different from stocks dropping. Also, there’s like nothing to bail out? OpenAI and their ilk are just sucking down capital and returning nothing. Even if they get one bailout, they’d need a continuous stream of unlimited money forever? I don’t think it’ll happen.

      I hope I’m right, cuz damn that shit is cancerous

      • merc@sh.itjust.works (↑1) · 1 month ago

        If Trump is still in charge when the bubble pops, he’ll do everything he can to bail them out. Altman knows how to flatter people, and he’s doing that constantly with Trump. A significant part of Trump’s base is silicon valley techbros who will lose their shirts if the bubble collapses. They had enough sway to get their guy installed as the VP. Getting a bailout will be easy for them. If they get poor, they won’t be able to fund the MAGA movement.

        Even if Trump isn’t in charge anymore, businesses that have fired a lot of employees and replaced their work with LLM slop will claim they’ll be ruined if the bubble suddenly pops, so they’ll frame it as the economy collapsing if the LLM bubble is allowed to pop. Not to mention they’ll claim it’s a national security matter, because if American LLMs disappear the only ones left will be Chinese ones. The fact that the military used LLMs extensively in its bombing of Iran shows how integrated they now are into the way the military does things, and you can’t ask the military to just go back to how things were done 5 years ago!

        I expect that when the LLM bubble starts to pop, there will be enormous bailouts from the government, adding tens of trillions to the US debt. That’s a long-term thing and will be someone else’s problem.

      • ZILtoid1991@lemmy.world (↑1/↓2) · 1 month ago

        I think a potential OpenAI “bailout” should go something like this:

        • The investors get their money back.
        • They have to sign a pact that they must not invest into AI anymore for a given amount of years (20+ minimum).
        • Massive regulatory overhaul to make sure stuff like this never happens. Also undo Dodge v. Ford Motor Co.
        • Scam Altman and the others go to life in prison.
        • jmill@lemmy.zip (↑14) · 1 month ago

          … why should the investors get their money back? They invested ludicrous amounts of money into a technology with obvious limitations from the start with the intention of using that technology to replace many people’s jobs. Losing that money will be a better lesson than some probably unenforceable “pact”.

        • leftzero@lemmy.dbzer0.com (↑5/↓1) · 1 month ago

          They have to sign a pact that they must not invest into AI anymore for a given amount of years (20+ minimum).

          Problem is, this might hurt actual AI research to punish a scam that has absolutely nothing to do with AI other than having co-opted the name for marketing purposes.

          (Any investment in actual AI research is doomed for decades anyway when this bubble pops, but this would cause even more harm than the bubble has already caused.)

          (Also, any form of research is probably ruined for decades anyway due to LLM-induced brain rot and having to sift through all the slop to try to recover any remaining fragments of actually usable knowledge, but, again, let’s not make it even worse than it already is.)

    • merc@sh.itjust.works (↑1) · 1 month ago

      And because they can frame it as being a national security issue because otherwise China’s LLMs will dominate.

  • jobbies@lemmy.zip (↑34/↓2) · 1 month ago

    Makes me so angry. All the problems that could’ve been solved with that kind of money: the climate crisis, world hunger, population migration, housing affordability.

    If Trump triggered WW3 and we all got nuked, I’d be fine with it. We don’t deserve to exist.

    • skuzz@discuss.tchncs.de (↑9) · 1 month ago

      Instead, all that money is being used to accelerate our doom. AI datacenters are unnecessarily consuming power and drinking water in small towns everywhere, many just dumping humidity into the air and letting that water literally blow away via lazy evaporative cooling. Most “normal” water-consuming processes consume, treat, and return water to the downstream-traveling aquifer.

      Now, couple that with an overall warming climate. The warmer the air, the more moisture it can hold, so we end up with more water vapor in the air than normal. With the weirding factor of climate change, this means more water energy for more powerful and destructive storms the likes of which humanity has never seen. Which feeds back into more ice melting, oceans rising, permafrost melting; cycle, accelerate, cycle, accelerate.

      Also, I’m real curious to see how millions of warehouses belching humidity and heat into the air across the surface of the globe can affect general weather patterns, but that sadly won’t be known until after the damage is done.

    • numberskull@lemmy.zip (↑9) · 1 month ago

      There was an ad during the Super Bowl that succinctly sums up how I feel right now: “America deserves Pepsi”

      • jobbies@lemmy.zip (↑11) · 1 month ago

        The climate crisis couldn’t be solved with such a small sum of money

        852 billion isn’t “a small sum of money”. And that’s just OpenAI; add what Google, MS, Meta etc. have spent and I’ll bet you’ll get close.

        addressing world hunger would decimate our economies

        What will decimate the global economy is the correction that will happen when either AI proves to be a folly or investors realise they’ll never recoup what they’ve paid out. Or when there’s mass unemployment because AI has taken all the human jobs. We’re cooked whichever way it goes.

        • Tollana1234567@lemmy.today (↑1) · 1 month ago

          Not taking jobs, but laying people off in massive numbers and “rehiring” cheaper employees who have been outsourced.

        • BygoneNeutrino@lemmy.world (↑2/↓3) · 1 month ago

          I feel as though people in third-world countries will be hit the hardest. Most of the jobs in first-world countries can already be done much, much cheaper by foreign workers. Although we could outsource everything, it feels like we give people jobs just to keep them busy.

          In the next two decades, I’m convinced that the countries that can’t afford to invest in AI and automation will be hit the hardest. The value of their work will be reduced, and richer nations will not freely share the technology.

          …that was my point about world hunger. Things are set up so that the global poor have just enough money to buy food and shelter. Their low wages are why people in my country have such a high standard of living.

      • humanspiral@lemmy.ca (↑2/↓1) · 1 month ago

        The climate crisis doesn’t cost anything to fix from an energy perspective, just freedom of competition. Renewable energy can outcompete incumbent energy, and a carbon tax-and-dividend scheme (zero cost) ensures the full (future) cost of dirty energy is considered.

  • ductTapedWindow@lemmy.zip (↑24) · 1 month ago

    I just used the voice feature in my truck to enter an address into Google Maps like always, and it came up as Gemini with a long speech. I repeated the address, and it asked me if I wanted the location in my home city or one in a city over 400 miles away. Regression with exponential cost.

    • skuzz@discuss.tchncs.de (↑14) · 1 month ago

      And every fake-friendly long-winded response consumes more electricity and water than it should, while also being useless.

    • KairuByte@lemmy.dbzer0.com (↑6) · 1 month ago

      God, I hate that. “Alexa, turn on sleep” reliably turned on the sleep scene until “Alexa Plus” came around; now it randomly assumes I’m trying to tell it goodnight and tells me to have a good night.

      Same with “sixty minutes”: it used to be immediately parsed as a sixty-minute timer, and now it sometimes simply results in “what about sixty minutes?”

      They’ve lowered the success metrics and satisfaction a whole bunch, but don’t fret, you can now hold a “conversation” with it! Complete with logical contradictions!

  • yopp@infosec.pub (↑27/↓5) · 1 month ago

    This is the most unhinged take from both sides.

    Time can’t exist in an LLM by design: it’s just a thing that predicts the next token based on previous tokens. There is no temporal relation between tokens; you can stop and resume generation at any point. How would anyone expect it to “count time”? Based on what? The best you can do is add a time mark to the model input at some interval.

    Simplifying somewhat, complex biological systems have some kind of clocks that actually chemically tick and induce some kind of signal they can react to.

    LLMs can’t do that at all. They never will. Some other architecture that runs in cycles? Maybe. But transformer shit? Never ever.
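    The “add a time mark to the model input” workaround could look something like this (the message shape and field names are invented for illustration, not any real API): the harness stamps wall-clock time onto each message, since the model itself has no clock.

    ```javascript
    // Wrap a user message with a wall-clock timestamp before it is sent
    // to the model, so temporal reasoning can at least reference real time.
    function withTimeMark(message, now = new Date()) {
      return { time: now.toISOString(), text: message };
    }

    const stamped = withTimeMark("set a 60 second timer", new Date("2025-01-01T12:00:00Z"));
    console.log(stamped.time); // "2025-01-01T12:00:00.000Z"
    ```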

    • MysticKetchup@lemmy.world (↑20) · 1 month ago

      The issue is that ChatGPT will tell you that it can do those things. Most of the hype for “AI” has been predicated on treating it like actual artificial intelligence and not the LLM parrot it truly is.

    • mrgoosmoos@lemmy.ca (↑8/↓1) · 1 month ago

      I don’t think anybody is expecting an LLM to do it.

      What they are expecting is the product, ChatGPT, to be a one-stop shop that can do basic tasks like that.

    • hydroptic@sopuli.xyz (↑4) · 1 month ago

      Some other architecture that runs in cycles?

      Spiking networks!

      Nobody really has a good handle on which SNN architecture would be ideal for which task, and they’re ridiculously hard to train, but if there is ever going to be an AGI (let alone an ASI), my money is on it popping out of something like an SNN that can also simulate neuroplasticity.

  • lobut@lemmy.ca (↑16) · 1 month ago

    Why’s this need to be on the LLM? They control the app, can’t they just make a tool call out?

    • NotMyOldRedditName@lemmy.world (↑10) · 1 month ago

      Hey, set a timer for 60 seconds.

      ChatGPT analyzes text

      You want a timer for 600 seconds, got it!

      Sets timer for 600 seconds with api.

      • Hisse@programming.dev (↑1) · 1 month ago

        It’ll actually misinterpret “seconds” as the number 2 instead and then start a timer of length 60*2, which is of course 150!

  • sunbeam60@feddit.uk (↑20/↓5) · 1 month ago

    Everyone’s getting their knickers in a twist over nothing here.

    Of course an AI can track time, if it’s given access to a timer MCP server.

    Can we track time without tools, just in our heads? Certainly not very accurately. We can, however, track it reasonably accurately if given access to a quartz stopwatch (typically ±15 s/month).

    A language model is based around language and reasoning by words/symbols. It’s not a surprise it doesn’t have timing capability.

    What Altman SHOULD be embarrassed about is that the model lies about its capabilities. That implies the context is still not right: it should be adequately trained and given context to prevent the lying. That’s a much more worrying issue, and something Anthropic handles far better, IMHO (when asked if it can track time, it says “no, not on my own”, and then proceeds to build a JavaScript timer that it offers up to track time).
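    For illustration, a minimal JavaScript timer of that kind (a sketch, not Claude’s actual output) might look like this: `tick()` is one countdown step, and `startTimer()` drives it once per second, firing a callback when done.

    ```javascript
    // One countdown step: returns the new remaining time and whether we're done.
    function tick(remaining) {
      const next = remaining - 1;
      return { remaining: next, done: next <= 0 };
    }

    function startTimer(seconds, onDone) {
      let remaining = seconds;
      const id = setInterval(() => {
        const step = tick(remaining);
        remaining = step.remaining;
        if (step.done) {
          clearInterval(id);
          onDone();
        }
      }, 1000);
      return id; // caller can cancel early with clearInterval(id)
    }

    // startTimer(60, () => console.log("Time's up!"));
    ```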

    • TexasDrunk@lemmy.world (↑7/↓2) · 1 month ago

      I don’t use them, but I follow the news about them loosely. The key difference between these models is epistemic humility. Claude has a pretty good idea of what its capabilities are and where the ceiling is. ChatGPT has no clue what its limits are, so it believes it can do everything. Basically, ChatGPT has a lot of info and no idea where the gaps live, while Claude has a fair idea of when to search or use some external function to handle something. Gemini has less of that than Claude but more than ChatGPT. Grok has little to no epistemic humility, but it did manage to accurately portray Musk as a world champion piss drinker, something none of the others were able to do.

      I say that, but it’s been a few months since I looked. That could have changed, because shit moves fast. By the looks of what it’s trying to do with the timer, ChatGPT has less than it used to. Possibly because of the way the model is trained to be helpful and confident.

    • 3abas@lemmy.world (↑4/↓2) · 1 month ago

      It could simply save a timestamp of the “begin timer” message and compare it to the timestamp of the “end” message. It’s not that complicated, and writing and executing a script is overkill… it just needs access to a calculator skill.

      Yes, it handles it better, but it’s still a dumb approach and a waste of energy.
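      A sketch of that timestamp-diff approach (the names and naive message matching are invented): remember when the “begin” message arrived, and on the “end” message just subtract. No script generation, no real timer.

      ```javascript
      // Per-conversation start times, kept by the harness, not the model.
      const startTimes = new Map();

      function onMessage(conversationId, text, now = Date.now()) {
        if (/begin timer/i.test(text)) {
          startTimes.set(conversationId, now);
          return "Timer started.";
        }
        if (/\bend\b/i.test(text) && startTimes.has(conversationId)) {
          const elapsedMs = now - startTimes.get(conversationId);
          startTimes.delete(conversationId);
          return `Elapsed: ${(elapsedMs / 1000).toFixed(1)} s`;
        }
        return null; // not a timer message; hand it to the model
      }

      console.log(onMessage("c1", "begin timer", 1000)); // "Timer started."
      console.log(onMessage("c1", "end", 61500));        // "Elapsed: 60.5 s"
      ```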

      • sunbeam60@feddit.uk (↑1) · 1 month ago

        Aren’t we saying exactly the same thing? Give it an MCP server or a native skill that CAN track time.

        • 3abas@lemmy.world (↑1) · 1 month ago

          Well, yes. I’m just responding to the “Anthropic does it better” part: yes, but not by much.

          `const timePassed = (start, end) => end - start;` is much simpler than creating a whole timer.

    • Jhex@lemmy.world (↑1) · 1 month ago

      Well, messages are clearly not stateless (otherwise there would be no context), but in general, yes: the issue is not the lack of capability, it’s the complete unawareness of it and the insistence on lying about it.

      THIS time it is ridiculously obvious, but what if it does this after checking a very large data set where there would be no (good) way to verify its answer?

      This is why AI, in its current form, is basically useless. If you cannot trust it NOT to lie, and must/should verify everything yourself, you might as well skip the useless step of asking.

      • sunbeam60@feddit.uk (↑2) · 1 month ago

        To call AI useless is quite a strong statement.

        There’s a million places to use it!

        The problem is that the market thinks there are a billion places to use it. And right now we’re funding 999 million places that shouldn’t be using AI but have the funding to do that dumb thing, so we can figure out the one million places where it makes fantastic sense.

        • Jhex@lemmy.world (↑1) · 1 month ago

          I get your point, and yes, I was exaggerating for effect, but… are there a million places to use AI where you can blindly trust its output/work?

          I do not really think it’s completely useless; however, I do think the uses are very, very limited (compared to the hype), and the cost of running these models for the benefit they provide makes them even less practical.

  • TheV2@programming.dev (↑14) · 1 month ago

    Shit like this is a reminder to me that a large portion of the hype behind some AI products comes from people who have no clue what these products even do. I wonder how the world would change if these jacks of all trades who invest wasted less time collecting ideas to fill up their pockets and instead spent more time actually understanding the ideas they have chosen and building at least a fundamental knowledge.

    • WhyJiffie@sh.itjust.works (↑2) · 1 month ago

      I wonder how the world would change if these jacks of all trades who invest wasted less time collecting ideas to fill up their pockets and instead spent more time actually understanding the ideas they have chosen and building at least a fundamental knowledge.

      I am afraid they would be even more dangerous

  • 1984@lemmy.today (↑12) · 1 month ago

    Sam Altman wants funding right?

    Here is an idea. I would pay 1000 dollars to get in a boxing ring with this guy, and probably a lot of other people would love to get a shot at that punchable face, no?

    We have solved funding.

  • Avicenna@programming.dev (↑12) · 1 month ago

    You would already be doing a great service to the world if you produced a really well-tuned search engine / information digger with LLMs, but no, you had to periodically hype it as AGI because it can memorize entire textbooks with some accuracy. You did this to yourselves, and if you fall, it will be because of these expectations which were not met.