• sugar_in_your_tea@sh.itjust.works

    Well yeah, they’re enough to meet the minimum use cases so they can upsell most people on expensive RAM upgrades.

    That’s why I don’t buy laptops with soldered RAM. That’s getting harder and harder these days, but my needs for a laptop have also gone down. If they solder RAM, there’s nothing you can (realistically) do if you need more, so you’ll pay extra when buying so they can upcharge a lot. If it’s not soldered, you have a decent option to buy RAM afterward, so there’s less value in upselling too much.

    So screw you Apple, I’m not buying your products until they’re more repair friendly.

    • akilou@sh.itjust.works

      I had an extra stick of RAM available the other day, so I went to open my wife’s Lenovo to see if it’d take it, and the damn thing is screwed shut with the smallest Torx screws I’ve ever seen, smaller than any I have. I was so annoyed.

      • SpaceNoodle@lemmy.world

        The real question is why you don’t have a complete precision screwdriver set.

        • akilou@sh.itjust.works

          I thought I did! Until I got the smallest one out and it just spun on top of the screw.

      • sugar_in_your_tea@sh.itjust.works

        I bought the E495 because the T495 had soldered RAM and one RAM slot, while the E495 had two replaceable RAM slots. Adding more RAM didn’t need any special tools. Newer E-series and T-series both have one RAM slot and some soldered RAM. I’m guessing you’re talking about one of the consumer lines, like the Yoga series or something?

        That said, Lenovo (well, Motorola in this case, but Lenovo owns Motorola) puts all kinds of restrictions on your rights if you unlock the bootloader of their phones (PDF version of the agreement). That, plus going down the path of soldering RAM, gives me serious concerns about the direction they’re heading, so I can’t really recommend their products anymore.

        If I ever need a new laptop, I’ll probably get a Framework.

    • lukmly013 💾@lemmy.sdf.org

      That’s why I don’t buy laptops with soldered RAM.

      Oh, that shit is soldered on…
      I mean, I did see that on some laptops, but only those cheap things in €150 range (new) which even use eMMC for storage.

    • scarabic@lemmy.world

      These days I don’t realistically expect my RAM requirements to change over the lifetime of the product. And I’m keeping computers longer than ever: 6+ years where it used to be 1 or 2.

      People have argued millions of times on the internet that Apple’s products don’t meet people’s needs and are massively overpriced. Meanwhile they just keep selling like crazy and people love them. I think the issue comes from having pricing expectations set in the race-to-the-bottom world of commoditized Windows/Android trash.

      • sugar_in_your_tea@sh.itjust.works

        I upgraded my personal laptop a year or so after I got it (started with 8GB, which was fine until I did Docker stuff), and I’m probably going to upgrade my desktop soon (16GB, which has been fine for a few years, but I’m finally running out). My main complaint about my work laptop is RAM (16GB I think; I’d love another 8-16GB), but I cannot upgrade it because it’s soldered, so I have to wait for our normal cycle (4 years; will happen next year). I upgraded my NAS RAM when I upgraded a different PC as well.

        I don’t do it very often, but I usually buy what I need when I build/buy the machine and upgrade 3-4 years later. I also often upgrade the CPU before doing a motherboard upgrade, as well as the GPU.

        Meanwhile they just keep selling like crazy and people love them. I think the issue comes from having pricing expectations set in the race-to-the-bottom world of commoditized Windows/Android trash.

        I might agree if Apple hardware was actually better than alternatives, but that’s just not the case. Look at Louis Rossmann’s videos, where he routinely goes over common failure cases that are largely due to design defects (e.g. display cable being cut, CPU getting fried due to a common board short, butterfly keyboard issues, etc). As in, defects other laptops in a similar price bracket don’t have.

        I’ve had my E-series ThinkPad for 6 years with no issues whatsoever. The USB-C charge port is getting a little loose, but that’s understandable since it’s been mostly a kids’ Minecraft device for a couple of years now, and kids are hard on computers. I had my T-series before that for 5-ish years until it finally died due to water damage (a lot of water).

        Apple products (at least laptops) are designed for aesthetics first, not longevity. They do generally have pretty good performance though, especially with the new Apple Silicon chips, but they source a lot of their other parts from the same companies that provide parts for the rest of the PC market.

        If you stick to the more premium devices, you probably won’t have issues. Buy business-class laptops and phones with long software support cycles. For desktops, I recommend buying higher-end components (Gold or Platinum power supply, mid-range or better motherboard, etc.), or buying from a local DIY shop with a good warranty if buying prebuilt.

        Like anything else, don’t buy the cheapest crap you can, buy something in the middle of the price range for the features you’re looking for.

    • BorgDrone@lemmy.one

      That’s why I don’t buy laptops with soldered RAM.

      In my opinion, the disadvantages of user-replaceable RAM far outweigh the advantages. The same goes for discrete GPUs. Apple moved away from both, and I expect PC manufacturers to follow Apple’s move in the next decade or so, as they always do.

      • sugar_in_your_tea@sh.itjust.works

        Here’s how I see the advantages of soldered RAM:

        • better performance
        • less risk of physical damage
        • more energy efficient
        • smaller

        The risk of physical damage is so incredibly low already, and the energy use of RAM is also incredibly low, so neither of those seems important.

        So that leaves performance, which I honestly haven’t found good numbers for. If you have this, I’m very interested, but since RAM speed is rarely the bottleneck in a computer (unless you have specific workloads), I’m going to assume it to be a marginal improvement.

        So really, I guess “smaller” is the best argument, and I honestly don’t care about another half centimeter of space, it’s really not an issue.

        • BorgDrone@lemmy.one

          So that leaves performance, which I honestly haven’t found good numbers for. If you have this, I’m very interested, but since RAM speed is rarely the bottleneck in a computer (unless you have specific workloads), I’m going to assume it to be a marginal improvement.

          This is where you’re mistaken. There is one thing that integrated RAM enables that makes a huge difference for performance: unified memory. GPU code is almost always bandwidth-limited, which is why on a graphics card the RAM is soldered on and physically close to the GPU itself; that proximity is needed to meet the GPU’s high bandwidth requirements.

          By having everything in one package, CPU and GPU can share the same memory, which means that you eliminate any overhead of copying data to/from VRAM for GPGPU tasks. But there’s more than that, unified memory doesn’t just apply to the CPU and GPU, but also other accelerators that are part of the SoC. What is becoming increasingly important is AI acceleration. UMA means the neural engine can access the same memory as the CPU and GPU, and also with zero overhead.

          This is why user-replaceable RAM and discrete GPUs are going to die out. The overhead and latency of copying all that data back and forth over the relatively slow PCIe bus is just not worth it.

          • sugar_in_your_tea@sh.itjust.works

            Do you have actual numbers to back that up?

            The best I’ve found is benchmarks of Apple silicon vs Intel+dGPU, but that’s an apples to oranges comparison. And if I’m not mistaken, Apple made other changes like a larger bus to the memory chips, which again makes comparisons difficult.

            I’ve heard about potential benefits, but without something tangible, I’m going to have to assume it’s not the main driver here. If the difference is significant, we’d see more servers and workstations running soldered RAM, but AFAIK that’s just not a thing.

            • BorgDrone@lemmy.one

              The best I’ve found is benchmarks of Apple silicon vs Intel+dGPU, but that’s an apples to oranges comparison.

              The thing with benchmarks is that they only show you the performance of the type of workload the benchmark is trying to emulate. That’s not very useful in this case. Current PC software is not built with this kind of architecture in mind, so it was never designed to take advantage of it. In fact, it’s the exact opposite: since transferring data to/from VRAM is a huge bottleneck, software is designed to avoid it as much as possible.

              For example: a GPU is extremely good at performing an identical operation on lots of data in parallel. The GPU can perform such an operation much, much faster than the CPU. However, copying the data to VRAM and back may add so much additional time that it still takes less time to run it on the CPU; a developer may then choose to run it on the CPU instead, even if the GPU was specifically designed to handle that kind of work. On a system with UMA you would absolutely run this on the GPU.
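              To put rough numbers on that trade-off (a back-of-the-envelope sketch, not a benchmark; the PCIe figure and timings below are illustrative assumptions):

```python
# Rough model of the offload decision described above: a discrete GPU
# only wins if its speed advantage outweighs the round-trip copy cost.
# All figures are illustrative assumptions, not measurements.

PCIE4_X16_BPS = 32e9  # ~32 GB/s theoretical peak for PCIe 4.0 x16

def offload_wins(data_bytes, cpu_time_s, gpu_time_s, bus_bps=PCIE4_X16_BPS):
    """True if GPU compute plus copying to VRAM and back beats the CPU."""
    transfer_s = 2 * data_bytes / bus_bps  # copy input in, copy results out
    return gpu_time_s + transfer_s < cpu_time_s

# A job the GPU runs 5x faster still loses once 4 GB of copies are counted:
print(offload_wins(4e9, cpu_time_s=0.30, gpu_time_s=0.06))  # False
# With unified memory there is no copy, so the GPU wins outright:
print(offload_wins(4e9, cpu_time_s=0.30, gpu_time_s=0.06, bus_bps=float("inf")))  # True
```

              With the copy term removed, the same job flips from not-worth-it to worth-it, which is the whole argument for UMA.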

              The same thing goes for something like AI accelerators. What PC software exists that takes advantage of such a thing?

              A good example of what happens if you design software around this kind of architecture can be found here. This is a post by a developer who worked on Affinity Photo. When they designed this software they anticipated that hardware would move towards a unified memory architecture and designed their software based on that assumption.

              When they finally got their hands on UMA hardware in the form of an M1 Max, that laptop chip beat the crap out of a $6000 W6900X.

              We’re starting to see software taking advantage of these things on macOS, but the PC world still has some catching up to do. The hardware isn’t there yet, and the software always lags behind the hardware.

              I’ve heard about potential benefits, but without something tangible, I’m going to have to assume it’s not the main driver here. If the difference is significant, we’d see more servers and workstations running soldered RAM, but AFAIK that’s just not a thing.

              It’s coming, but Apple is ahead of the game by several years. The problem is that in the PC world no one has a good answer to this yet.

              Nvidia makes big, hot, power hungry discrete GPUs. They don’t have an x86 core and Windows on ARM is a joke at this point. I expect them to focus on the server-side with custom high-end AI processors and slowly move out of the desktop space.

              AMD is in the best position on the desktop. They have a decent x86 core and GPU, and they already make APUs. Intel is trying to get into the GPU game but has some catching up to do.

              Apple has been quietly working towards this for years. They have their UMA architecture in place, they are starting to put some serious effort into GPU performance, and rumor has it that with the M4 they will make some big steps in AI acceleration as well. The PC world is held back by a lot of legacy hardware and software, but there will be a point where it will have to catch up or be left in the dust.

          • __dev@lemmy.world

            “unified memory” is an Apple marketing term for what everyone’s been doing for well over a decade. Every single integrated GPU in existence shares memory between the CPU and GPU; that’s how they work. It has nothing to do with soldering the RAM.

            You’re right about the bandwidth though: current socketed RAM standards have severe bandwidth limitations which directly limit the performance of integrated GPUs. This again has little to do with being socketed though: LPCAMM supports up to 9.6 GT/s, considerably faster than what ships with the latest Macs.

            This is why user-replaceable RAM and discrete GPUs are going to die out. The overhead and latency of copying all that data back and forth over the relatively slow PCIe bus is just not worth it.

            The only way discrete GPUs can possibly be outcompeted is if DDR starts competing with GDDR and/or HBM in terms of bandwidth, and there’s zero indication of that ever happening. Apple needs to put a whole 128GB of LPDDR in their systems to be comparable (in bandwidth) to literally 10-year-old dedicated GPUs; the 780 Ti had over 300GB/s of memory bandwidth with a measly 3GB of capacity. DDR is simply not a good choice for GPUs.
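            Peak bandwidth is just transfer rate times bus width, so the gap is easy to sketch (the spec-sheet figures below are used as assumptions; check the datasheets):

```python
def peak_bandwidth_gb_s(mtransfers_per_s, bus_width_bits):
    """Theoretical peak bandwidth in GB/s: transfers/s times bytes per transfer."""
    return mtransfers_per_s * 1e6 * (bus_width_bits / 8) / 1e9

# GTX 780 Ti: 7 GT/s effective GDDR5 on a 384-bit bus
print(peak_bandwidth_gb_s(7000, 384))  # 336.0
# Dual-channel (128-bit) DDR5 at the 9.6 GT/s LPCAMM figure above
print(peak_bandwidth_gb_s(9600, 128))  # 153.6
```

            So even socketed RAM at LPCAMM speeds is still well short of a decade-old graphics card on bandwidth, which is the point being made here.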

            • BorgDrone@lemmy.one
              6 months ago

              “unified memory” is an Apple marketing term for what everyone’s been doing for well over a decade.

              Wrong. Unified memory (UMA) is not an Apple marketing term; it’s a description of a computer architecture that has been in use since at least the 1970s. For example, game consoles have always used UMA.

              Every single integrated GPU in existence shares memory between the CPU and GPU; that’s how they work.

              Again, wrong.

              While iGPUs have existed for PCs for a long time, they did not use a unified memory architecture. What they did was reserve a portion of the system RAM for the GPU. For example on a PC with 512MB RAM and an iGPU, 64MB may have been reserved for the GPU. The CPU then had access to 512-64 = 448MB. While they shared the same physical memory chips, they both had a separate address space. If you wanted to make a texture available to the GPU, it still had to be copied to the special reserved RAM space for the GPU and the CPU could not access that directly.

              With unified memory, both CPU and GPU share the same address space. Both can access the entire memory. No RAM is reserved purely for the GPU. If you want to make something available to the GPU, nothing needs to be copied, you just need to point to where it is in RAM. Likewise, anything done by the GPU is immediately accessible by the CPU.

              Since there is one memory pool for both, you can use RAM more efficiently. If you have a discrete GPU with 16GB VRAM and your app only needs 8GB VRAM, the other 8GB just sits there being useless. Alternatively, if your app needs 24GB VRAM, you can’t run it because your GPU only has 16GB, even if you have lots of system RAM available.

              With UMA you can use all the RAM you have for whatever you need it for. On an M2 Ultra with 192GB RAM you can use almost all of that for the GPU (minus a little bit that’s used for the OS and any running apps). Even on a tricked-out PC with a 4090 you can’t run anything that needs more than 24GB VRAM. Want to run something where the GPU needs 180GB of memory? No problem on an M2 Ultra.
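              As a toy model of that difference (the sizes are the examples from this thread; the function names are made up for illustration):

```python
# With a discrete card, a GPU job is capped by VRAM no matter how much
# system RAM sits idle; under UMA the GPU can use whatever is left over.
def max_gpu_alloc_discrete_gb(vram_gb, free_system_ram_gb):
    return vram_gb  # free system RAM doesn't help the GPU at all

def max_gpu_alloc_unified_gb(total_ram_gb, os_and_apps_gb):
    return total_ram_gb - os_and_apps_gb  # one shared pool

print(max_gpu_alloc_discrete_gb(24, 168))  # 24  (4090-class card)
print(max_gpu_alloc_unified_gb(192, 12))   # 180 (192GB UMA machine)
```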

              It has nothing to do with soldering the RAM.

              It has everything to do with soldering the RAM. One of the reasons iGPUs sucked, other than not using UMA, is that GPU performance is almost always limited by memory bandwidth. Compared to VRAM, standard system RAM has much, much less bandwidth, making iGPUs slow.

              A high-bandwidth memory bus, like a GPU needs, has a lot of connections and runs at high speeds. The only way to do this reliably is to physically place the RAM very close to the actual GPU. Why do you think GPUs do not have user-upgradable RAM?

              Soldering the RAM makes it possible to integrate a CPU and a non-sucking GPU. Go look at the inside of a PS5 or XSX and you’ll see the same thing: an APU with the RAM chips soldered to the board very close to it.

              This again has little to do with being socketed though: LPCAMM supports up to 9.6 GT/s, considerably faster than what ships with the latest Macs.

              LPCAMM is a very recent innovation. Engineering samples weren’t available until late last year and the first products will only hit the market later this year. Maybe this will allow for Macs with user-upgradable RAM in the future.

              The only way discrete GPUs can possibly be outcompeted is if DDR starts competing with GDDR and/or HBM in terms of bandwidth

              What use is high bandwidth memory if it’s a discrete memory pool with only a super slow PCIe bus to access it?

              Discrete VRAM is only really useful for gaming, where you can upload all the assets to VRAM in advance and data practically only flows from CPU to GPU and very little in the opposite direction. Games don’t matter to the majority of users. GPGPU is much more interesting to the general public.

              • __dev@lemmy.world

                Wrong. Unified memory (UMA) is not an Apple marketing term; it’s a description of a computer architecture that has been in use since at least the 1970s. For example, game consoles have always used UMA.

                Apologies, my google-fu seems to have failed me. Search results are filled with only Apple-related results, but I was now able to find stuff from well before. Though nothing older than the 1990s.

                While iGPUs have existed for PCs for a long time, they did not use a unified memory architecture.

                Do you have an example? Every single one I look up has at least optional UMA support. Reserved RAM was a thing, but it wasn’t the GPU’s entire memory; only the framebuffer was reserved. AFAIK iGPUs have always shared memory like they do today.

                It has everything to do with soldering the RAM. One of the reasons iGPUs sucked, other than not using UMA, is that GPU performance is almost always limited by memory bandwidth. Compared to VRAM, standard system RAM has much, much less bandwidth, making iGPUs slow.

                I don’t disagree, I think we were talking past each other here.

                LPCAMM is a very recent innovation. Engineering samples weren’t available until late last year and the first products will only hit the market later this year. Maybe this will allow for Macs with user-upgradable RAM in the future.

                Here’s a link to buy some from Dell: https://www.dell.com/en-us/shop/dell-camm-memory-upgrade-128-gb-ddr5-3600-mt-s-not-interchangeable-with-sodimm/apd/370-ahfr/memory. Here’s the laptop it ships in: https://www.dell.com/en-au/shop/workstations/precision-7670-workstation/spd/precision-16-7670-laptop. Available since late 2022.

                What use is high bandwidth memory if it’s a discrete memory pool with only a super slow PCIe bus to access it?

                Discrete VRAM is only really useful for gaming, where you can upload all the assets to VRAM in advance and data practically only flows from CPU to GPU and very little in the opposite direction. Games don’t matter to the majority of users. GPGPU is much more interesting to the general public.

                gestures broadly at every current use of dedicated GPUs. Most of the newfangled AI stuff runs on Nvidia DGX servers, which use dedicated GPUs. Games are a big enough industry for dGPUs to exist in the first place.

        • BorgDrone@lemmy.one

          User-replaceable RAM is slow, which means you can’t integrate the CPU and GPU in one package. That means a GPU with its own RAM, which has huge disadvantages.

          Even a 4090 only has 24GB and slow transfers to/from VRAM. The GPU can only operate on data in VRAM, so anything you need it to work on you need to copy over the relatively slow PCIe bus to the GPU. Then once it’s done you need to copy the results back over the PCIe bus to system RAM for the CPU to be able to access it. This considerably slows down GPGPU tasks.

          • Dojan@lemmy.world

            Ah yeah, I see. That’s definitely a downside if you work with something where that becomes a factor.