Note: I added more info in the OP
cross-posted from: https://discuss.tchncs.de/post/52934409
cross-posted from: https://discuss.tchncs.de/post/52933193 (OP)
A few days ago I noticed that my system disk (~120 GB) is almost maxed out. Since almost everything that takes up considerable disk space lives on my two other disks, I started investigating, supported by ChatGPT. Turns out I’ve been running on a writable snapshot that keeps growing with each update. Again, the important stuff is on my other disks, so reinstalling Linux all over would be an inconvenience, but not a problem. Still, I’d like to try repairing the current installation, if only for the lessons learned.
I let ChatGPT summarize everything as a post so you don’t have to deal with my half-educated gibberish:
<ChatGPT> I’m running openSUSE Tumbleweed with the default Btrfs + Snapper setup. My root partition (~119 GB) is suddenly 98% full, even though it should mostly contain system files.
du -xh --max-depth=1 / only shows ~16 GB used, but df -h reports ~113 GB used. Root, /var, /usr, /home, etc. are all on the same Btrfs filesystem. Snapper is enabled.
I confirmed that Btrfs snapshots are consuming the space, but I’m stuck with a writable snapshot (#835) that is currently mounted, so I can’t delete it from the running system.
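For reference, confirming this can be done with a few standard commands (output will differ per system; the default openSUSE layout is assumed):

```
# Which subvolume is mounted as / (on openSUSE normally @/.snapshots/<N>/snapshot),
# and which one is the Btrfs default the system boots from?
findmnt -no FSROOT,OPTIONS /
sudo btrfs subvolume get-default /

# Btrfs-aware space accounting (more meaningful than df on Btrfs)
sudo btrfs filesystem usage /

# List all Snapper snapshots for the root config
sudo snapper -c root list
```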
To make things worse:

- The GRUB menu does not appear (Shift/Esc does nothing)
- The system still boots into Linux, but I can’t select older snapshots
- I tried repairing from an Ubuntu live USB, but:
  - NVMe device names differ from the installed system
  - Chroot fails with /bin/bash or /usr/bin/env not found, likely because /usr is a separate Btrfs subvolume and not mounted
At this point I’m trying to:

- Properly mount all Btrfs subvolumes from a live system
- Chroot into the installed system
- Delete old Snapper snapshots
- Reinstall GRUB so the menu works again
If anyone has step-by-step guidance for recovering openSUSE Tumbleweed with Btrfs snapshots and broken GRUB access, I’d really appreciate it. I’m comfortable with the command line but want to avoid making things worse. </ChatGPT>
Hope someone can make something of it and help me fix my system.
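For reference, a rough sketch of what I understand the live-USB recovery to look like, assuming openSUSE’s default Btrfs layout (device names, snapshot numbers, and the target disk below are placeholders, not values from my actual system, so please correct me if a step is off):

```
# From the live USB: identify the Tumbleweed root partition (names differ here)
lsblk -f                          # look for the btrfs partition, e.g. /dev/nvme0n1p2
ROOT=/dev/nvme0n1p2               # placeholder - replace with the real device

# Mount the subvolume the installed system actually boots from (the Btrfs
# default subvolume, i.e. the active snapshot), not the bare top level;
# mounting the top level is a common reason chroot can't find /bin/bash
sudo mount "$ROOT" /mnt
ls /mnt/usr/bin/env               # sanity check: should exist now

# Bind the API filesystems and enter the chroot
for d in dev proc sys run; do sudo mount --rbind "/$d" "/mnt/$d"; done
sudo chroot /mnt /bin/bash

# Inside the chroot:
mount -a                          # mounts /var, /usr/local, /boot/efi, ... from fstab
snapper list                      # see which snapshots exist
snapper delete 700-834            # example range - adjust to the real numbers
grub2-mkconfig -o /boot/grub2/grub.cfg
grub2-install /dev/nvme0n1        # BIOS install; an EFI system may need shim-install instead
```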


I forget the exact commands, but there is something like btrfs scrub that will prune or purge unnecessary data. But also: is your /home on the same snapshotted filesystem? When you do a df -h, do you see the same fullness for root as for home? A full /home also fills root if they are on the same Btrfs volume.
I know you are mounting drives into /home, I assume after the fact and not as part of the initial LVM setup. You could try unmounting them and seeing /home’s true size compared to root’s.
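A quick way to compare (the device paths in the comments are just examples):

```
df -h / /home            # same size/used figures usually means the same Btrfs volume
findmnt -no SOURCE /     # e.g. /dev/nvme0n1p2[/@/.snapshots/835/snapshot]
findmnt -no SOURCE /home # e.g. /dev/nvme0n1p2[/@/home] -> same device, same pool
```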
btrfs scrub it is, but it did not do much. However, I followed some cleanup tips and could free 34 GB of stuff, so for now I’m good, I guess :D
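For anyone searching later: the cleanup tips in question are usually along these lines (snapshot numbers and limits below are examples, not my exact values):

```
sudo snapper list                            # see what exists and how old it is
sudo snapper delete 700-830                  # example range of old snapshots
sudo snapper cleanup number                  # apply the number-based cleanup algorithm

# Keep fewer snapshots in the future (example values; config lives in /etc/snapper/configs/root)
sudo snapper -c root set-config NUMBER_LIMIT=5 NUMBER_LIMIT_IMPORTANT=3

sudo btrfs balance start -dusage=50 /        # optionally reclaim half-empty data chunks
```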
Maybe grab parallel-disk-usage from https://github.com/KSXGitHub/parallel-disk-usage
You can run it on / and see what’s hogging space
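If it’s the tool I’m thinking of, the binary is called pdu; a minimal sketch of getting and running it, assuming installation via cargo:

```
cargo install parallel-disk-usage   # prebuilt binaries should also be on the GitHub releases page
sudo pdu /                          # scan from the root and show the biggest directories
```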