I spend a lot of time in /tmp sending temporary output to files and testing commands when building shell scripts. It’s appropriate that a long-haired fluffer butt lives there because that’s been most of my cats through the years.
Not pictured: /opt, the raccoon
A useless number of copies of cat.
cp $(which cat) /*/
Is it accurate?
Is your server not run by 6 cats?
My ethernet is cat 6.
Can anyone explain to me why it was so important to break the Linux file system?
Like, I believe it was, since literally every single distribution did it, but I don’t get why it was so important that we had to make things incompatible unless you know what you’re doing.
The original reasoning for having all those directories was that some nerds in a university/lab kept running out of disk space and had to keep changing the file system layout to spread everything out across an increasing number of drives.
Noobs should’ve just used zfs
/home because you want to save the user files if you need to reinstall.
/var and /tmp because /var holds log files and /tmp temporary files that can easily take up all your disk space. So it’s best to just fill up a separate partition to prevent your system from locking up because your disk is full.
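A sketch of what that split looks like in /etc/fstab (the device names and sizes here are made-up placeholders, not anyone’s actual layout):

```
# /etc/fstab — hypothetical partitioning; devices/sizes are assumptions
/dev/sda2  /      ext4   defaults           0 1
/dev/sda3  /home  ext4   defaults           0 2
/dev/sda4  /var   ext4   defaults           0 2
tmpfs      /tmp   tmpfs  size=2G,nosuid     0 0
```

With something like this, a runaway log in /var can only fill /var’s partition; / stays writable and the system keeps working.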
/usr and /bin… this I don’t know
I would think putting /bin and /lib on the fastest thing possible would be nice 🤷
Could you not just use subdirectories?
They are subdirectories?!
Ok, technically, but why couldn’t we keep a stable, explicit hierarchy without breaking compatibility or relying on symlinks and assumptions?
In other words
Why not /system/bin, /system/lib, /apps/bin.
Or why not keep /bin as a real directory forever
Or why force /usr to be mandatory so early?
Because someone in the 1970s-80s (who was smarter than we are) decided that single-user mode files should live in the root and multi-user files in /usr. Somebody else (who was also smarter than we are) decided that was a stupid ass solution, because most of those files are identical and it’s easier to just symlink them to the multi-user directories (because nobody runs single-user systems anymore) than to make sure that every search path contains the correct versions of the files, while also preserving backwards compatibility with systems that expect to run in single-user mode.

Some distros, like Debian, also have separate executables for unprivileged sessions (/bin and /usr/bin) and privileged sessions (i.e. root, /sbin and /usr/sbin). Other distros, like Arch, symlink all of those directories to /usr/bin to preserve compatibility with programs that refer to executables by full path.

But for most of us young whippersnappers, the most important reason is that it’s always been done like this, and changing it now would make a lot of developers and admins very unhappy, and lots of software very broken.
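You can see how the Arch-style merge works by recreating it in a sandbox (a sketch; the paths mimic the real layout but everything lives under a temp directory, and the `hello` script is made up):

```shell
#!/bin/sh
# Sketch of a merged-/usr layout, built in a scratch directory.
root=$(mktemp -d)
mkdir -p "$root/usr/bin"

# The Arch-style compatibility symlink: /bin -> usr/bin
ln -s usr/bin "$root/bin"

# A stand-in executable installed the "real" way, under usr/bin.
printf '#!/bin/sh\necho hi\n' > "$root/usr/bin/hello"
chmod +x "$root/usr/bin/hello"

# Old software hardcoding the /bin path still works: the symlink
# resolves to usr/bin, so both paths reach the same file.
"$root/bin/hello"
```

The point is that a script hardcoding `#!/bin/sh` or `/bin/anything` keeps working after the merge, because the old directory is just a pointer to the new one.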
The only thing better than perfect is standardized.
Who puts /etc on a separate drive?