I am new to Rockstor and have installed it using a single drive for my root filesystem and a collection of 7 other drives as a data pool (RAID 1).
I intend to use this server as a backup server (via SFTP). Since it will replace an existing SFTP server, I began transferring data from the old server using rsync. The transfer stalled part-way through, and after multiple attempts that also stalled, I decided to reboot the Rockstor server.
After reboot, I could not connect to the web UI but was able to log in via SSH. I discovered that my root filesystem is full:
```
[root@backup1 Backup]# btrfs fi df /
Data, single: total=144.54GiB, used=144.53GiB
System, DUP: total=8.00MiB, used=16.00KiB
Metadata, DUP: total=1.00GiB, used=321.53MiB
GlobalReserve, single: total=180.23MiB, used=0.00B

[root@backup1 Backup]# btrfs fi us /
Overall:
    Device size:                 146.56GiB
    Device allocated:            146.56GiB
    Device unallocated:            4.00MiB
    Device missing:                  0.00B
    Used:                        145.16GiB
    Free (estimated):              7.67MiB      (min: 7.67MiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              180.23MiB      (used: 0.00B)

Data,single: Size:144.54GiB, Used:144.53GiB
   /dev/sda3     144.54GiB

Metadata,DUP: Size:1.00GiB, Used:321.53MiB
   /dev/sda3       2.00GiB

System,DUP: Size:8.00MiB, Used:16.00KiB
   /dev/sda3      16.00MiB

Unallocated:
   /dev/sda3       4.00MiB
```
This is despite the fact that I was transferring files only to the data pool, not the root filesystem.
Using `du`, I cannot find any files that account for anything close to the used space on the root filesystem, so I am assuming the root filesystem has somehow become corrupted.
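For reference, since snapshots and subvolumes do not show up in `du` output on btrfs, these are the additional checks I am planning to run next (a sketch only; I am assuming Rockstor may have created snapshots on the root filesystem):

```shell
# List all subvolumes and snapshots under the root filesystem;
# snapshots hold on to extents that du cannot see.
btrfs subvolume list /

# Per-subvolume space accounting, including data shared with snapshots.
btrfs filesystem du -s /* 2>/dev/null

# After deleting anything, force a sync and re-check, since btrfs
# frees space asynchronously.
btrfs filesystem sync /
btrfs filesystem df /
```

If anyone knows whether Rockstor snapshots the root filesystem by default, that would help me interpret the output.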
- Are there known issues with Rockstor/btrfs that can result in a full filesystem such as I described?
- Is there any way to diagnose more fully why BTRFS believes the filesystem is full when I cannot find any files that take up space anywhere close to the size of the filesystem?
- Is there a way to recover my root filesystem, or is it best to re-install Rockstor and hope this doesn't recur?