How to safely clear space on a full boot drive

I’m putting this in troubleshooting because I’m not sure whether some housekeeping functions failed to run as expected, leading to this.

Just came back from 3 weeks away and couldn’t access my Samba shares. I tried logging into the GUI and got the Rockstor banner, but an error screen where the dashboard should have been. I accessed it via VPN several times while I was gone, so I think the failure is recent. I rebooted from the CLI, and when I tried to get to the GUI I got an “unable to connect” error. I did a shutdown from the CLI and powered back up while watching the local monitor. I noticed about 6 boot options during the boot process, and then boot failed with a disk-full error. Back to the CLI to determine what was full, and got this:

[root@jonesville ~]# df
Filesystem      1K-blocks      Used  Available  Use%  Mounted on
devtmpfs          8191912         0    8191912    0%  /dev
tmpfs             8202392         0    8202392    0%  /dev/shm
tmpfs             8202392      8636    8193756    1%  /run
tmpfs             8202392         0    8202392    0%  /sys/fs/cgroup
/dev/sda3        25868288  25714676        172  100%  /
tmpfs             8202392         4    8202388    1%  /tmp
/dev/sda3        25868288  25714676        172  100%  /home
/dev/sda1          487634    248524     209414   55%  /boot
tmpfs             1640480         0    1640480    0%  /run/user/0

[root@jonesville /]# fdisk -l
Disk /dev/sda: 30.0 GB, 30016659456 bytes, 58626288 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00000ff4
Device     Boot     Start       End    Blocks  Id  System
/dev/sda1  *         2048   1026047    512000  83  Linux
/dev/sda2         1026048   6889471   2931712  82  Linux swap / Solaris
/dev/sda3         6889472  58626047  25868288  83  Linux

Tried a yum clean all with the following results:
[root@jonesville /]# yum clean all
error: rpmdb: BDB0113 Thread/process 6258/140430441711424 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db5 - (-30973)
error: cannot open Packages database in /var/lib/rpm
Error: rpmdb open failed
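
For what it’s worth, the usual recovery for this kind of Berkeley DB corruption in the rpm database (once some space has been freed) is to remove the stale __db.* environment files and rebuild the indexes. A hedged sketch, not Rockstor-specific advice - run as root, and keep a backup of the files first:

```shell
# Back up the Berkeley DB environment files before touching them
mkdir -p /root/rpmdb-backup
cp -a /var/lib/rpm/__db.* /root/rpmdb-backup/ 2>/dev/null || true

# Remove the stale environment files and rebuild the rpm indexes
rm -f /var/lib/rpm/__db.*
rpm --rebuilddb
```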

Before I try a database recovery I think I need to free up some space and try the clean again. I assume there are some log files I should be able to delete, but I’d like some advice so I don’t delete the wrong files by mistake.
Finally, is there a service that should have been running by default to prevent this, or did I miss a setup step? Thanks in advance for the help, Del
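
Before deleting anything, it helps to see where the space actually went. A sketch - the depth and the journal size are just example values, and -x keeps du from crossing into other mounted filesystems:

```shell
# Largest top-level directories on the root filesystem only
# (-x: stay on one filesystem, -d1: one level deep)
du -xh -d1 / 2>/dev/null | sort -rh | head -15

# The systemd journal is a common space hog on a small root disk
if command -v journalctl >/dev/null; then
    journalctl --disk-usage
    # journalctl --vacuum-size=100M   # only run once you are sure
fi
```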

I found a bunch of files in a folder under /usr that I was able to kill off, and everything booted and I’m back to normal. Now the lingering question is what filled up, and why? Is there something I should check to prevent a recurrence?
Thanks again.
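
One way to answer “what filled up” after the fact is to look for large, recently modified files. A sketch - the 100MB size and 30-day age thresholds are arbitrary examples, and -xdev keeps find on the root filesystem:

```shell
# Files over 100MB modified in the last 30 days, smallest to largest
find / -xdev -type f -size +100M -mtime -30 -exec ls -lh {} + 2>/dev/null | sort -k5 -h
```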

Hi and welcome.

Don’t take this as harsh, but:
please post the layout of your disks / pools / shares
df is not useful on btrfs; please use:
btrfs fi df /mount_point_of_fs_in_question
btrfs fi usage /mount_point_of_fs_in_question
btrfs fi show

In terms of cleaning up space on a btrfs file system, you need to understand how the FS works in the first place. Deleting an object does not physically erase anything - it just adds another entry to the filesystem metadata saying that something was deleted. So, long story short, deleting can actually decrease available space :slight_smile: To free up space on a btrfs FS you need to have deleted something (I’m sure something was deleted already) -> then run a balance (this reclaims the allocated-but-unused space).

It is convoluted, but once the penny drops, the fact that nothing is ever overwritten can save you a lot of hassle (and data).
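
The reclaim step can be done incrementally with balance filters. A sketch assuming the filesystem in question is mounted at / and you are root - the usage thresholds are just example values:

```shell
# Free completely empty data chunks first; this needs no workspace,
# so it works even when the device is 100% allocated
btrfs balance start -dusage=0 /

# Then compact chunks that are less than 5% full (cheap); raise the
# threshold only if the device is still fully allocated afterwards
btrfs balance start -dusage=5 /
```

The low-threshold passes matter because on a fully allocated device a full balance can itself fail with ENOSPC before it has reclaimed anything.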

@Tomasz_Kusmierz Thank you for the advice. Since the problem was on the SSD I boot from and not on my data disks, I didn’t think to use the btrfs commands. Here is the output of the commands, but I’m not sure how they help me determine what’s filling up or how to prevent it in the future. Any further advice greatly appreciated.
[root@jonesville ~]# btrfs fi df /
Data, single: total=24.39GiB, used=20.22GiB
System, single: total=32.00MiB, used=16.00KiB
Metadata, single: total=256.00MiB, used=98.67MiB
GlobalReserve, single: total=32.33MiB, used=0.00B

[root@jonesville ~]# btrfs fi usage /
Overall:
    Device size:           24.67GiB
    Device allocated:      24.67GiB
    Device unallocated:     1.00MiB
    Device missing:           0.00B
    Used:                  20.32GiB
    Free (estimated):       4.17GiB  (min: 4.17GiB)
    Data ratio:                1.00
    Metadata ratio:            1.00
    Global reserve:        32.33MiB  (used: 0.00B)

Data,single: Size:24.39GiB, Used:20.22GiB
   /dev/sda3   24.39GiB

Metadata,single: Size:256.00MiB, Used:98.67MiB
   /dev/sda3  256.00MiB

System,single: Size:32.00MiB, Used:16.00KiB
   /dev/sda3   32.00MiB

Unallocated:
   /dev/sda3    1.00MiB
[root@jonesville ~]# btrfs fi show /
Label: ‘rockstor_rockstor’ uuid: fc1f40be-7805-4d82-8c17-28907b119917
Total devices 1 FS bytes used 20.32GiB
devid 1 size 24.67GiB used 24.67GiB path /dev/sda3

The files I deleted yesterday reduced the used space from 100% to 84%.

Did you run a balance on this FS? If not, you can run it from the GUI or from the console:
btrfs balance start /
As I said, just deleting files does not give you the free space by itself.

Also, why is your root file system so busy? 20GB is a lot … my suggestion is to never use your root FS for anything but the system … all shares should be somewhere else.
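
Snapshots are another thing worth checking when the root filesystem is unexpectedly full, since they pin data that has since been deleted. A sketch, assuming the root pool is mounted at /:

```shell
# List all subvolumes and snapshots on the root pool;
# deleting stale snapshots (then balancing) releases the data they pin
btrfs subvolume list /
```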