“I’ve been saving for months to get the Corsair Dominator 64GB CL30 kit,” one beleaguered PC builder wrote on Reddit. “It was about $280 when I looked,” said u/RaidriarT. “Fast forward today on PCPartPicker, they want $547 for the same kit? A nearly 100% increase in a couple months?”

  • Aceticon@lemmy.dbzer0.com · 4 hours ago

    Doesn’t Windows 11 in practice require even more memory than Windows 10 to operate with decent performance?

    Meanwhile my Linux gaming PC seems to actually use less memory than back when it was a Windows machine.

    • chiliedogg@lemmy.world · 1 hour ago

      My work laptop was upgraded to Windows 11 and performance has severely suffered.

      As someone who usually uses 3 monitors (sometimes 4) and does GIS, it’s an issue.

      • LiveLM@lemmy.zip · 53 minutes ago

        Are you using scaling?
        On my work laptop, dragging a window from one monitor to the next would make it “snag” at the border as Windows struggled and stuttered trying to change the scale.
        It looked soooo fucking stupid I couldn’t believe my eyes.

        • HugeNerd@lemmy.ca · 46 minutes ago

          Many such moments with the PCs at work. I can’t wait to retire and never have to deal with anything modern again. What absolute dogshit and slop computers have become.

  • utopiah@lemmy.world · 2 hours ago (edited)

    Genuine question here, for a “normal” computer user, say somebody who:

    • browses the Web
    • listens to music, plays videos, etc.
    • sometimes plays video games, even 2025 AAA titles, on a relatively recent midrange GPU, say something from around 2020
    • even codes something of a normal size, let’s say up to Firefox size (which is huge)

    … which task actually requires more than, say, 32 GB?

    • Devjavu@lemmy.dbzer0.com · 10 minutes ago

      If by normal you mean average, they don’t even really need 16 GB.
      Creative work can gobble up RAM, and heavy-ass multitasking does as well.
      So it’s more in the digitally productive professional or hobbyist cases where a person needs such amounts.

      For development, high amounts of RAM can be useful for all sorts of things, not just compiling but also testing, though 32 GB is often enough.

  • SabinStargem@lemmy.today · 10 hours ago

    As far as RAM goes, this could turn into a good thing: it gives companies an incentive to invest in bigger, faster RAM and in motherboard bandwidth big enough to handle it.

    The next big generation of hardware will be much better IMO, simply because the companies will have to compete on their merits. The downside is not having enough supply right now, but once the logistics and tech is in place, even non-AI people will benefit.

    • plyth@feddit.org · 4 hours ago

      The downside is not having enough supply right now, but once the logistics and tech is in place, even non-AI people will benefit.

      Have you forgotten that they agreed to reduce production to stabilize prices? Capacity is not the real bottleneck.

  • lightnsfw@reddthat.com · 14 hours ago (edited)

    Who’s bewildered? Of course this was going to happen. Everything enjoyable about life is being ruined. It’s not surprising at all.

  • Bongles@lemmy.zip · 18 hours ago

    This seems like an appropriate place for me to bitch:

    2 months ago I bought a new pre-built PC. It should’ve had 64 GB of RAM but came with 32 GB. They said the sticks they used were out of stock, so they gave me a $100 USD credit. I spent the $100 on another 32 GB of what I thought was the exact same RAM. I fucked up and bought a slightly higher speed, so after trying for an afternoon I found they wouldn’t work together. I also checked the correct listing I should’ve bought, but it was more expensive, at about $125.

    I gave up and decided I’d just buy the faster RAM again when it came back in stock, rather than return it and get the correct one. It went out of stock in the time it took me to place my order, so I figured I’d just wait.

    2 MONTHS later, it never came back in stock, but an almost identical pair with slightly different timings is in stock right now at $216. If I’d had any idea this was coming in just 2 months, I could’ve bought 64 GB at once and started fresh, or corrected my mistake by returning what I bought.

    So I guess I’ll continue waiting, but hey, at least Notepad has Copilot in it.

    • SailorFuzz@lemmy.world · 2 hours ago

      I just don’t see the value of having 64 GB of RAM. Not for the conventional user, not for gamers, not for the average power user either. Maybe there’s a need if you’re doing a lot of video editing and large file manipulation… but like… I would argue that MOST people, unless they’re trying to play AAA games while streaming and gooning, don’t need more than 16 GB.

      I have 32 GB and I’ve never topped it out. And yeah, Windows eats a lot (I really need to give up the ghost and migrate to Linux), but even still, with 32 GB I don’t even get close. 64 GB is just going to be a lot of unused space. A bigger number doesn’t mean better. I doubt you’d even notice unless you fall into the previously mentioned category of users.

    • timhayes1991@lemmy.zip · 11 hours ago

      I always thought RAM of different speeds worked together; the sticks just run at the speed of the slowest one.

  • Tim_Bisley@piefed.social · 1 day ago

    AI increases my power utility bill
    AI takes my water
    AI increases the price of GPUs
    AI increases the price of RAM
    AI makes my search results worse and slower
    AI is inserted into every website, app, program, and service making them all worse

    All so businesses and companies can increase productivity, reduce staff, and then turn around and increase prices to customers.

    • Capricorn_Geriatric@lemmy.world · 2 hours ago

      All so businesses and companies can increase productivity, reduce staff, and then turn around and increase prices to customers.

      As if. The only thing AI is to businesses is a lost bet. And they don’t like losing. So they’re betting even more, hoping some shiny “AGI” will start existing if they throw enough money, and waste enough other resources, on the AI bandwagon.

    • Credibly_Human@lemmy.world · 2 hours ago

      Let’s also not forget that a bunch of them want to completely replace lightning-fast, simple UIs with AI so they don’t need their own programmers, and your experience of actually getting things done becomes outright painful.

  • carrylex@lemmy.world · 21 hours ago

    Nice article, but the numbers are a lot lower here in the EU.

    While there is some price increase, it’s currently more around 50%, not 100%.

    The selected kit is also extremely expensive (350€, previously ~300€); similar kits are available for a lot less (270€, previously ~180€), so I doubt that anyone was buying it in the first place.

    I also think it’s not completely AI-related; more likely another RAM price-fixing scandal is happening right now. Pretty much the same thing we’re seeing today happened in 2017–2018.

      • 46_and_2@lemmy.world · 3 hours ago

        Bruh, my whole mid-to-high range gaming PC costs 850 to 2K euro. What is the intended use of such an expensive RAM kit? Is it LLMs again?

        • SaveTheTuaHawk@lemmy.ca · 2 hours ago

          Scientific applications. My lab has a PC running 196 GB of RAM for processing 3D and 4D microscopy voxel datasets.
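
          As a rough illustration of why voxel data gets that big (the dimensions and bit depths below are hypothetical, not from this lab), a quick footprint calculation:

          ```python
          # Back-of-the-envelope RAM footprint of dense microscopy volumes.
          # All shapes and dtypes here are illustrative assumptions.
          import numpy as np

          def volume_bytes(shape, dtype=np.float32):
              """Bytes needed to hold a dense voxel array of this shape in memory."""
              return int(np.prod(shape, dtype=np.int64)) * np.dtype(dtype).itemsize

          single_3d = volume_bytes((2048, 2048, 2048))                 # one float32 volume
          series_4d = volume_bytes((20, 2048, 2048, 2048), np.uint16)  # 20 timepoints, 16-bit

          print(f"3D volume:      {single_3d / 1e9:6.1f} GB")  # ~34.4 GB
          print(f"4D time series: {series_4d / 1e9:6.1f} GB")  # ~343.6 GB
          ```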

          • boonhet@sopuli.xyz · 2 hours ago (edited)

            Assuming you don’t need the absolute tightest timings and highest speed, you can get 192 GB from Corsair for “just” 660 euros where I live, still pretty far from 2000 euros. The speed and timings are the same as the 1300-euro kit, also from Corsair; the cheaper kit just has no RGB.

            So at 2k EUR I’m assuming it’s going to be either more than 192 GB (in which case, is that even a desktop motherboard, or are we talking about servers?) or some super-high-speed RAM.

  • tabular@lemmy.world · 1 day ago

    People buying RAM: oh no, what do we do?

    People buying GPUs: first time?

      • tabular@lemmy.world · 1 day ago

        Well, I was aware RAM prices fluctuate.

        I’ve just never been this unlucky when buying more RAM or building a new system on a new DDR generation.

        • givesomefucks@lemmy.world · 1 day ago

          They don’t really fluctuate; it’s more of an oscillation.

          Sometimes prices are “normal,” but a wide range of issues can change that quickly, and it stays that way for months because everyone waits for prices to go down. So as they go down, people stop waiting and start buying, which pushes them back up.

    • Credibly_Human@lemmy.world · 2 hours ago

      The thing is, the “people” propping it up are massive tech companies with, collectively, trillions of dollars to burn on a thing that is making their stock prices soar.

      As the primary investors doing the circular buying, they have no incentive to stop. The big problem here is that the stock market provides awful incentives to everyone.

  • brucethemoose@lemmy.world · 23 hours ago (edited)

    I just got a 2x64GB 6000 kit before its price skyrocketed by like $130. I saw other kits going up, but had no clue I timed it so well.

    …Also, why does “AI” need so much CPU RAM?

    In actual server deployments, pretty much all inference work is done in VRAM (read: HBM/GDDR); they could get by with almost no system RAM. And honestly most businesses are too dumb to train anything that extensively. ASICs that would use, say, LPDDR are super rare, and stuff like Hybrid/IGP inference is the realm of a few random folks with homelabs… Like me.

    I think ‘AI’ might be an overly broad term for general server buildout.

    • Kissaki@feddit.org · 10 hours ago (edited)

      I suspect RAM may become increasingly useful with the shift from pure chat LLMs to connected agents, MCP, and caching results and data for scaling things like public Internet search and services.

      When I think of database server software, a lot of its performance gains come from keeping frequently used data in RAM. With the expansion of LLM systems and their concerns (backing data, connectedness, the need for optimisation), a shift toward caching and keeping data in RAM seems to suggest itself. These systems are already wasteful/big and operate on a lot of data, so it seems plausible that wouldn’t be a small cache.
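
      For a concrete sense of what that kind of caching looks like, here is a minimal, generic sketch of an in-RAM result cache for expensive LLM or search calls (the class and sizes are illustrative, not taken from any particular product):

      ```python
      # Minimal in-memory LRU cache: keep recent LLM / search responses in RAM
      # so repeated queries skip the expensive call. Purely illustrative.
      from collections import OrderedDict

      class LRUCache:
          def __init__(self, max_items=10_000):
              self.max_items = max_items
              self._data = OrderedDict()

          def get(self, key):
              if key in self._data:
                  self._data.move_to_end(key)   # mark as recently used
                  return self._data[key]
              return None

          def put(self, key, value):
              self._data[key] = value
              self._data.move_to_end(key)
              if len(self._data) > self.max_items:
                  self._data.popitem(last=False)  # evict least recently used

      cache = LRUCache()

      def answer(query, expensive_llm_call):
          cached = cache.get(query)
          if cached is not None:
              return cached                       # served straight from RAM
          result = expensive_llm_call(query)
          cache.put(query, result)
          return result
      ```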

    • tty5@lemmy.world · 22 hours ago (edited)

      The same memory production capacity can be allocated to DDR5 or to HBM, and OpenAI signed contracts with SK hynix and Samsung, the two largest RAM manufacturers in the world, buying a significant percentage of next year’s production.

      DDR5 prices started spiking as that deal’s impact propagated through the supply chain. I bought a 2x32 6800 CL30 kit for 195 euro 12 days ago. It was 330 euro 4 days later.

      • brucethemoose@lemmy.world · 22 hours ago (edited)

        …Is it that interchangeable?

        TBH I know little about memory fabs and HBM ICs, but I know (say) TSMC can’t just switch from a power-optimized process to a high-frequency one at the drop of a hat.

        • tty5@lemmy.world · 21 hours ago

          Slightly different part, same process. The bigger bottleneck is packaging - HBM is 3D-stacked.

          • brucethemoose@lemmy.world · 21 hours ago

            Ah, yeah. And it’s on the fab to do that.

            I always thought it’d be cool for CPUs to switch to packaged RAM, too. Samsung apparently tried it with Wide I/O for mobile ARM stuff, but it never caught on.

      • brucethemoose@lemmy.world · 22 hours ago (edited)

        They can ALL be run from RAM, theoretically. I bought 128 GB so I can run GLM 4.5 with the experts offloaded to CPU, with a custom trellis/K-quant mix; but this is a “personal use” tinkerer setup basically no one but hobbyists will touch.

        Qwen Next is good at that because it has a very low active parameter count.

        …But they aren’t actually deployed that way. They’re basically always deployed on cloud GPU boxes that serve dozens/hundreds of people at once, in parallel.

        AFAIK the only major model actually developed for CPU inference is one of the esoteric Gemma releases, aimed at mobile. And the bitnet experiments, which aren’t very big so far.

        (In case it’s not obvious, this is my special interest, and I’m happy to ramble on about how to set up ‘niche gaming rig hybrid models’ for anyone interested).
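
        For the curious, here is the rough arithmetic behind that kind of hybrid setup; the parameter counts and bits-per-weight are assumptions in the ballpark of a large MoE like GLM 4.5, not exact figures:

        ```python
        # Why sparse MoE models suit CPU-RAM offload: the whole expert set must
        # fit somewhere, but only a small "active" slice is touched per token.
        # All numbers below are illustrative assumptions.
        def weight_gb(params_billion: float, bits_per_weight: float) -> float:
            """Approximate weight storage in GB for a given quantization."""
            return params_billion * 1e9 * bits_per_weight / 8 / 1e9

        TOTAL_PARAMS  = 355  # billion, all experts combined (assumed)
        ACTIVE_PARAMS = 32   # billion, active per token (assumed)
        BITS          = 3.0  # average bits/weight of a small custom quant (assumed)

        print(f"Full model weights: ~{weight_gb(TOTAL_PARAMS, BITS):.0f} GB")   # ~133 GB
        print(f"Active per token:   ~{weight_gb(ACTIVE_PARAMS, BITS):.0f} GB")  # ~12 GB
        # The bulk sits in cheap system RAM (plus some in VRAM); only the small
        # active slice and the KV cache need fast memory, which is what makes a
        # 128 GB desktop viable at hobbyist speeds.
        ```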

        • Passerby6497@lemmy.world · 21 hours ago

          I for one would enjoy triggering your unskippable cutscenes on setting up local CPU-based AI, if it can work on Linux with an older AMD card.

          I don’t have funds for anything fancy, but I’d be interested in playing around with it. I’ve been wanting to get something like that set up for Home Assistant.

          • SabinStargem@lemmy.today · 10 hours ago

            If you just want an easy way to set up AI on Windows or Linux, KoboldCPP is my recommendation for your backend. It supports the GGUF format, which allows you to use both RAM and VRAM simultaneously. It won’t be the fastest thing, but it is easy enough to set up, with a bundled GUI for prep and actual usage. Through the IP address it gives you, you can hook the backend into a frontend of your choice.

            KoboldCPP
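
            As an example of what “hooking into the backend” can look like, here is a minimal Python sketch against a locally running KoboldCPP instance; the port and endpoint reflect its usual defaults, but check your own instance’s startup output:

            ```python
            # Minimal client for a local KoboldCPP backend over HTTP.
            # URL and payload fields follow KoboldCPP's usual defaults; verify
            # against your own instance (it prints its address on startup).
            import requests

            KOBOLD_URL = "http://localhost:5001/api/v1/generate"  # default port 5001

            payload = {
                "prompt": "Explain what a GGUF file is in one sentence.",
                "max_length": 120,    # tokens to generate
                "temperature": 0.7,
            }

            resp = requests.post(KOBOLD_URL, json=payload, timeout=120)
            resp.raise_for_status()
            print(resp.json()["results"][0]["text"])
            ```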

          • brucethemoose@lemmy.world · 21 hours ago (edited)

            Plenty of folks do AMD. A popular homelab setup is 32 GB AMD MI50 GPUs, which are quite cheap on eBay. Even Intel is fine these days!

            But what’s your setup, precisely? CPU, RAM, and GPU.

            • afk_strats@lemmy.world · 21 hours ago

              I have an MI50/7900 XTX gaming/AI setup at home which I use for learning and to test out different models. Happy to answer questions.

            • brucethemoose@lemmy.world · 20 hours ago (edited)

              The key is which model, and how.

              For the really sparse MoEs, you might be better off trying ik_llama.cpp, especially if you are targeting a “small” quant. But the dense Gemma models (as good as they are) are probably not the best choice for 8 GB of RAM these days.