Note: I added more info in the OP

cross-posted from: https://discuss.tchncs.de/post/52934409

cross-posted from: https://discuss.tchncs.de/post/52933193 (OP)

A few days ago I noticed that my system disk (~120 GB) is almost maxed out. Since almost everything that takes up considerable disk space resides on my two other drives, I started investigating, supported by ChatGPT. Turns out I’ve been running on a writable snapshot that keeps growing with each update. Again, the important stuff is on my other drives, so reinstalling Linux from scratch would be an inconvenience, but not a problem. Still, I’d like to try repairing the current installation, if only for the lessons learned.

I let ChatGPT summarize everything as a post so you don’t have to deal with my half-educated gibberish:

<ChatGPT> I’m running openSUSE Tumbleweed with the default Btrfs + Snapper setup. My root partition (~119 GB) is suddenly 98% full, even though it should mostly contain system files.

du -xh --max-depth=1 / only shows ~16 GB used, but df -h reports ~113 GB used. Root, /var, /usr, /home, etc. are all on the same Btrfs filesystem. Snapper is enabled.

I confirmed that Btrfs snapshots are consuming the space, but I’m stuck with a writable snapshot (#835) that is currently mounted, so I can’t delete it from the running system.

To make things worse:

  • The GRUB menu does not appear (Shift/Esc does nothing)
  • The system still boots into Linux, but I can’t select older snapshots

I tried repairing from an Ubuntu live USB, but:

  • NVMe device names differ from the installed system
  • Chroot fails with /bin/bash or /usr/bin/env not found, likely because /usr is a separate Btrfs subvolume and not mounted

At this point I’m trying to:

  • Properly mount all Btrfs subvolumes from a live system
  • Chroot into the installed system
  • Delete old Snapper snapshots
  • Reinstall GRUB so the menu works again

If anyone has step-by-step guidance for recovering openSUSE Tumbleweed with Btrfs snapshots and broken GRUB access, I’d really appreciate it. I’m comfortable with the command line but want to avoid making things worse. </ChatGPT>
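
For completeness, these are the kinds of commands I’ve been using to confirm that it really is the snapshots eating the space. Treat it as a checklist rather than exact output from my machine; the /.snapshots path assumes the default openSUSE layout:

    # How much the filesystem reports as used vs. what du sees
    df -h /
    sudo btrfs filesystem usage /

    # All subvolumes/snapshots on the root filesystem
    sudo btrfs subvolume list /

    # What Snapper knows about, including the writable #835
    sudo snapper list

    # Per-snapshot space attribution (can be slow); the exclusive data
    # is what deleting a snapshot would actually free
    sudo btrfs filesystem du -s /.snapshots/*/snapshot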

Hope someone can make something of all this and help me fix my system.
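
And here is the rough live-USB recovery sequence I have in mind, pieced together from what I’ve read so far. The device name (/dev/nvme0n1p2) and the subvolume names are assumptions based on the default openSUSE layout, so corrections are very welcome:

    # From the live system: identify the installed root partition first
    lsblk -f

    # Mounting without subvol= gives the *default* subvolume, which on
    # openSUSE is the currently active root snapshot
    sudo mount /dev/nvme0n1p2 /mnt

    # Mount the other subvolumes the installed system expects
    # (list them with: sudo btrfs subvolume list /mnt)
    sudo mount /dev/nvme0n1p2 /mnt/.snapshots -o subvol=@/.snapshots
    sudo mount /dev/nvme0n1p2 /mnt/var        -o subvol=@/var
    # ...repeat for whatever /mnt/etc/fstab lists (opt, srv, usr/local, home, ...)

    # Bind-mount the API filesystems and enter the installed system
    for d in dev proc sys run; do sudo mount --rbind /$d /mnt/$d; done
    sudo chroot /mnt /bin/bash

    # Inside the chroot: clean up snapshots and put GRUB back
    snapper list
    snapper delete <number>              # for each snapshot I no longer need
    grub2-mkconfig -o /boot/grub2/grub.cfg
    grub2-install /dev/nvme0n1           # legacy BIOS; an EFI setup needs /boot/efi mounted instead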

  • xtapa@discuss.tchncs.de (OP) · 18 hours ago

    > If your home partition is part of the same filesystem, then filling your home with games and media also shows up as filling root, because it’s all one volume.

    I have ~/Misc and ~/Games mounted from two different drives, and when I last checked (it has been a while, tbh) those drives did not affect the usage shown for root. I remember keeping an eye on it in the beginning because I wasn’t sure I had hooked up the drives correctly.
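
    If I want to double-check that today, I think something like this would confirm which filesystem each directory actually sits on (the paths are just mine from above):

        # Show the source device/subvolume each path is mounted from
        findmnt -T ~/Games
        findmnt -T ~/Misc
        findmnt -T /

        # The Games/Misc drives should show up as separate devices here,
        # not as the root Btrfs volume
        df -h / ~/Games ~/Misc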

    > If you don’t like CLI stuff: you can review the GRUB menu options in the YaST2 GUI app for boot. There you will see if there is a delay set so you can pick an alternate snapshot, or whether it boots directly.

    I tried updating the GRUB delay via the config file but could not apply the changes. I didn’t even think about YaST. What a good call! I’ll try this.
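
    (For the record, I suspect my CLI attempt failed because editing /etc/default/grub alone isn’t enough; the config has to be regenerated afterwards. Roughly, assuming the standard openSUSE paths:)

        # /etc/default/grub: give the menu a visible timeout
        GRUB_TIMEOUT=8
        # if the menu stays hidden, GRUB_TIMEOUT_STYLE=menu may also be needed

        # then regenerate the config GRUB actually reads at boot
        sudo grub2-mkconfig -o /boot/grub2/grub.cfg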

    > But first use the YaST2 GUI to review the filesystem; it will show you how many snapshots you have and whether they are marked important or not. You can also set how long to keep them, by time or by number of entries. You can delete old ones if you don’t need them. You can force a new snapshot. But I would clear disk space first if you are that full.

    I forgot to add my snapper list output in the initial post and added it later. I have only 8 snapshots, as older snapshots get deleted automatically. There is “#0 current”, “#835” from back in 2024, and 6 recent snapshots, each pair being one snap before an update and one after. From what I understand, my system keeps booting into the old #835 snapshot (I guess I did something wrong during a rollback), which now keeps growing after every system update.
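
    To verify whether I’m really running from #835, I believe the default subvolume and the mounted root should give it away. A quick check I plan to run (paths per the default layout):

        # Which subvolume is mounted as / right now?
        findmnt -no SOURCE /

        # Which subvolume is set as the default, i.e. what boots by default?
        sudo btrfs subvolume get-default /

        # If both point at .snapshots/835/snapshot, the system really lives in
        # that old snapshot; a fresh "snapper rollback" from a current snapshot
        # would create a new writable copy and switch the default.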

    > How have your "sudo zypper dup" upgrades gone?

    For the last 2 years they went through without noticeable issues.

    • BCsven@lemmy.ca · 18 hours ago

      The system will always boot to a writable snapshot, unless you choose an old one (which becomes read-only). If #835 shows up as a single snapshot, that is probably something you made a single snapshot of yourself, maybe? They usually come in pairs.

      Unless something went really odd.

      Look up tools to purge old Btrfs data, but first try manually making a new snapshot and see if it will boot into the new one on its own. If it can, you can probably delete the old snapshot pairs. The singles are often a large install, like the original install that served as the basis for subsequent incremental changes. There could be a lot of old data it is hanging onto.

      Also look at the Btrfs maintenance scripts to see when they would run; you can run the commands manually to try to clean up and purge data.
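
      Something along these lines on openSUSE, if I remember right; the btrfsmaintenance package ships the scripts and timers, so double-check the unit names on your install:

          # See when the shipped maintenance jobs would run
          systemctl list-timers 'btrfs-*' 'snapper-*'

          # Run Snapper's own cleanup algorithms by hand
          sudo snapper cleanup number
          sudo snapper cleanup timeline

          # Rebalance partially used chunks so btrfs can hand space back
          sudo btrfs balance start -dusage=50 /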

      • xtapa@discuss.tchncs.de (OP) · 17 hours ago

        I managed to get into GRUB after editing the config in YaST. I booted into a current snapshot and deleted most of the old ones, including the old #835.

        Still, my disk usage is at 118 / 120 GB. I cleaned up all the zypper stuff that is orphaned and unneeded.

        My snapper list output is now:

          0     │ single │                             │ root │            │        │ current
          1828  │ pre    │ Fr 16 Jan 2026 17:05:39 CET │ root │ 1.76 GiB   │ number │ yast snapper
          1832* │ single │ Fr 16 Jan 2026 17:17:41 CET │ root │ 272.00 KiB │        │ writable copy of #1807

        And some snapshots from all the cleanup. The new writable-copy entry looks just like the old #835 entry did, and I kinda get the feeling ChatGPT led me to the wrong conclusions.

        So now I need to find out why my disk space is still fucked :( Starting with btrfs cleanup I guess. Do you have any suggestions?
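
        In the meantime, this is what I plan to run to see where the space actually sits; more of a checklist than exact output, and the balance step is only worth it if “used” stays high with little real data:

            # Where the allocation goes: data vs. metadata vs. unallocated
            sudo btrfs filesystem usage /

            # Non-snapshot suspects on the root volume
            sudo du -xsh /var/* 2>/dev/null | sort -h | tail
            journalctl --disk-usage
            sudo zypper clean --all        # package cache under /var/cache/zypp

            # Return half-empty chunks to the unallocated pool
            sudo btrfs balance start -dusage=60 /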

        • BCsven@lemmy.ca · edited · 16 hours ago

          I forget the commands, but there is something like btrfs scrub or something that will prune or purge unnecessary data. But also, is your /home on the same snapshotted filesystem? Like, when you do a df -h, do you see the same fullness for root as for home? A full home also fills root if they are on the same Btrfs volume.

          I know you are mounting drives into home, I assume after the fact and not as part of the initial LVM setup. You could try unmounting them and see home’s true size compared to root.
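
          If I remember the tools right, scrub only verifies checksums, while balance is the one that repacks half-empty chunks and gives space back. Something like this should show the split without unmounting anything (adjust the paths to your setup):

              # Same Btrfs volume or not? df shows identical size/used
              # numbers for both if they share one filesystem
              df -h / /home

              # Scrub = integrity check only; balance = repack chunks
              sudo btrfs scrub start -B /
              sudo btrfs balance start -dusage=50 /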