Note: I added more info in the OP

cross-posted from: https://discuss.tchncs.de/post/52934409

cross-posted from: https://discuss.tchncs.de/post/52933193 (OP)

A few days ago I noticed that my system disk (~120 GB) is almost maxed out. Since almost everything that takes up considerable disk space resides on my two other disks, I started investigating, supported by ChatGPT. Turns out I've been running on a writable snapshot that keeps growing with each update. Again, the important stuff is on my other disks, so reinstalling Linux from scratch would be an inconvenience, but no problem. Still, I'd like to try repairing the current installation, if only for the lessons learned.

I let ChatGPT summarize everything as a post so you don’t have to deal with my half-educated gibberish:

<ChatGPT> I’m running openSUSE Tumbleweed with the default Btrfs + Snapper setup. My root partition (~119 GB) is suddenly 98% full, even though it should mostly contain system files.

du -xh --max-depth=1 / only shows ~16 GB used, but df -h reports ~113 GB used. Root, /var, /usr, /home, etc. are all on the same Btrfs filesystem. Snapper is enabled.

I confirmed that Btrfs snapshots are consuming the space, but I’m stuck with a writable snapshot (#835) that is currently mounted, so I can’t delete it from the running system.
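For the du/df discrepancy described above, a few read-only commands (run as root) show where the space actually sits before touching anything. This is a sketch assuming the default Tumbleweed Btrfs layout:

```shell
# Overall allocation vs. the naive df view:
btrfs filesystem usage /

# List all subvolumes; Snapper snapshots show up under .snapshots:
btrfs subvolume list /

# Which subvolume the system is actually booted from right now:
btrfs subvolume get-default /
findmnt -no SOURCE,OPTIONS /

# Snapper's own view of the snapshots:
snapper list
```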

To make things worse:

GRUB menu does not appear (Shift/Esc does nothing)

The system still boots into Linux, but I can’t select older snapshots

I tried repairing from an Ubuntu live USB, but:

NVMe device names differ from the installed system

Chroot fails with /bin/bash or /usr/bin/env not found

Likely because /usr is a separate Btrfs subvolume and not mounted

At this point I’m trying to:

Properly mount all Btrfs subvolumes from a live system

Chroot into the installed system

Delete old Snapper snapshots

Reinstall GRUB so the menu works again
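The four steps above might look like this from a live USB. This is a sketch, not a tested recipe: the device names (/dev/nvme0n1, /dev/nvme0n1p1, /dev/nvme0n1p2) and subvolume names (@/usr, @/var) are assumptions; check lsblk -f and the installed /etc/fstab for the real ones, and replace the snapshot range with numbers from your own snapper list.

```shell
# ASSUMPTION: installed root fs on /dev/nvme0n1p2, EFI partition on
# /dev/nvme0n1p1 -- verify with: lsblk -f
ROOT=/dev/nvme0n1p2

# Mounting without subvol= mounts the default subvolume, i.e. the
# snapshot the system currently boots from:
mount "$ROOT" /mnt

# Mount the remaining subvolumes listed in /mnt/etc/fstab, e.g.:
mount -o subvol=@/usr "$ROOT" /mnt/usr   # only if /usr is its own subvolume
mount -o subvol=@/var "$ROOT" /mnt/var
mount /dev/nvme0n1p1 /mnt/boot/efi       # if booting via EFI

# Bind the virtual filesystems and chroot:
for d in dev proc sys run; do mount --bind "/$d" "/mnt/$d"; done
chroot /mnt /bin/bash

# Inside the chroot: delete old snapshots, then reinstall GRUB.
snapper list
snapper delete 700-834                   # example range, not your numbers
grub2-mkconfig -o /boot/grub2/grub.cfg
grub2-install /dev/nvme0n1               # BIOS boot; on EFI, openSUSE uses shim-install
```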

If anyone has step-by-step guidance for recovering openSUSE Tumbleweed with Btrfs snapshots and broken GRUB access, I’d really appreciate it. I’m comfortable with the command line but want to avoid making things worse. </ChatGPT>

Hope someone can make something of it and help me fix my system.

  • xtapa@discuss.tchncs.de (OP) · 17 hours ago

    I managed to get into GRUB after editing the config in YaST. I booted into a current snapshot and deleted most of the old ones, including the old #835.

    Still, my disk usage is at 118/120 GB. I cleaned up all zypper packages that are orphaned and unneeded.

    My snapper list output is now

    0 │ single │ root │ current
    1828 │ pre │ Fr 16 Jan 2026 17:05:39 CET │ root │ 1.76 GiB │ number │ yast snapper
    1832* │ single │ Fr 16 Jan 2026 17:17:41 CET │ root │ 272.00 KiB │ writable copy of #1807

    And some snapshots from all the cleanup. The new 1832 entry looks just like the #835 entry did before, and I kinda get the feeling ChatGPT led me to wrong conclusions.

    So now I need to find out why my disk space is still fucked :( Starting with btrfs cleanup I guess. Do you have any suggestions?
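    One thing worth checking first: Btrfs frees the space of deleted snapshots asynchronously in the background, so df can lag behind for quite a while after a big cleanup. A sketch of read-only checks, run as root (btrfs qgroup show only works if quota groups are enabled):

```shell
# Wait for the background cleanup of deleted subvolumes to finish:
btrfs subvolume sync /

# Allocated vs. used space, per profile:
btrfs filesystem usage /

# Per-subvolume exclusive usage (requires quota groups):
btrfs qgroup show / 2>/dev/null || echo "qgroups not enabled"

# Actual on-disk usage of the big top-level directories:
btrfs filesystem du -s /.snapshots /home /var 2>/dev/null
```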

    • BCsven@lemmy.ca · edited · 16 hours ago

      I forget the exact command, but there is something like btrfs balance (scrub only verifies checksums, it doesn't free anything) that compacts allocated chunks and can reclaim space. But also: is your /home on the same Btrfs filesystem as the snapshots? When you do a df -h, do you see the same fullness for root as for home? A full home also fills root if they are on the same Btrfs volume.

      I know you are mounting drives into home, I assume after the fact and not as part of the initial LVM setup. You could try unmounting them and see home's true size compared to root.
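      To check whether /home shares the root Btrfs filesystem, something like this should tell (findmnt and df are read-only and safe):

```shell
# Same source device means same filesystem:
findmnt -no SOURCE /
findmnt -no SOURCE /home

# Subvolumes of one Btrfs filesystem report identical Used/Avail here:
df -h / /home
```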