I physically replaced a broken drive, but it didn’t quite work as expected; I ran into a few interface issues:
So now I still have my data but no redundancy, and Rockstor can’t add my new drive. When I try adding the disk to my pool I get:
Pool migration from single to single is not supported.
I can add it from the shell, though (sda):
[root@rockstor ~]# btrfs filesystem show
Label: 'rockstor_rockstor' uuid: f9674e49-fcb2-480f-a012-490cea875e6a
Total devices 1 FS bytes used 3.47GiB
devid 1 size 26.35GiB used 5.52GiB path /dev/sdc3
Label: 'raid1' uuid: b7ed32b3-43a3-423e-82e7-a9bc60676e2f
Total devices 4 FS bytes used 1.56TiB
devid 2 size 3.64TiB used 1.57TiB path /dev/sdb
devid 3 size 931.51GiB used 58.01GiB path /dev/sdd
devid 4 size 3.64TiB used 0.00B path /dev/sda
*** Some devices missing
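For reference, the add itself was along these lines (a sketch; it assumes root and the /mnt2/data mount point used in the delete command below):

```shell
# Add the replacement disk to the mounted pool (run as root).
# NOTE: /mnt2/data is the mount point used elsewhere in this thread;
# adjust it if your pool is mounted somewhere else.
#   btrfs device add /dev/sda /mnt2/data

# Afterwards you can count the devices btrfs reports as present;
# simulated here against the `btrfs filesystem show` output pasted above:
show_output="devid 2 size 3.64TiB used 1.57TiB path /dev/sdb
devid 3 size 931.51GiB used 58.01GiB path /dev/sdd
devid 4 size 3.64TiB used 0.00B path /dev/sda"
printf '%s\n' "$show_output" | grep -c '^devid'   # 3 present devices
```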
But I can’t remove the old phantom one:
[root@rockstor ~]# btrfs device delete missing /mnt2/data
The command hangs in the background and never finishes:
[root@rockstor ~]# ps ax|grep btrfs|grep missing
13803 pts/1 D+ 0:45 btrfs device delete missing /mnt2/data
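For what it’s worth, the D in the STAT column means uninterruptible sleep, i.e. the process is stuck waiting on I/O inside the kernel, which is why it can’t be killed. The state letter can be read straight from /proc; a small sketch (using this shell’s own PID as a stand-in for 13803):

```shell
# Field 3 of /proc/<pid>/stat is the process state letter:
#   R = running, S = sleeping, D = uninterruptible sleep (disk wait).
# For the stuck delete you would read /proc/13803/stat instead of $$.
state=$(awk '{print $3}' "/proc/$$/stat")
echo "$state"
```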
I feel stuck. How can I get the Rockstor interface to see the changes, and how can I remove this phantom disk?
Any hint appreciated.
Sorry for the delay in my response and thanks for posting details. I have a few questions.
What are the kernel version (uname -r) and btrfs-progs version (btrfs --version)? I just want to confirm your system is up to date.
Judging by the output of btrfs fi show raid1, it seems the data is definitely not fully balanced (not even between devids 2 and 3). What is the RAID profile? Can you share the output of btrfs fi df /mnt2/raid1 (assuming the pool name is raid1)?
delete missing doesn’t work if btrfs thinks the redundancy of the data is not acceptable. We may have to balance first. Please answer the questions above and we can go from there.
Thanks for your help.
[root@rockstor ~]# uname -r
[root@rockstor ~]# btrfs --version
[root@rockstor ~]# btrfs fi df /mnt2/raid1
Data, RAID1: total=1.56TiB, used=1.56TiB
System, RAID1: total=8.00MiB, used=256.00KiB
Metadata, RAID1: total=6.00GiB, used=4.38GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
Thanks. Here’s why I think delete missing is refusing to delete the phantom: though the Pool is RAID1, not all data is distributed per RAID1. Can you kick off a balance?
You can do this from the Web-UI or with the following command:
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt2/raid1
Since you have 1.5TB of data, this could take a while.
You can query the status on the Web-UI or with this command:
btrfs balance status /mnt2/raid1
As the balance redistributes data, you’ll see the “used” numbers in btrfs fi show raid1 normalize. Once the balance is finished, you can try deleting the phantom again. Hope it all works out; keep us posted.
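While it runs, btrfs balance status prints a progress line; here is a sketch of pulling the percentage out of it (the numbers below are made up, not from your system):

```shell
# Illustrative `btrfs balance status /mnt2/raid1` output while running
# (example numbers only, not from this system):
status="Balance on '/mnt2/raid1' is running
103 out of about 1602 chunks balanced (110 considered), 94% left"

# Extract the "percent left" figure from that output:
printf '%s\n' "$status" | grep -o '[0-9]*% left'
```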
You were right: after the balance, the delete missing worked.
My data share is back online, that’s good.
Some inconsistencies remain, though:
The dashboard took some time to show the true data size, but it did eventually (1.5TB).
But on the storage/pools page, Rockstor says I use 0% of my raid1 data pool:
On the pools page it says “Share size enforcement is temporarily disabled due to incomplete support in BTRFS. Until this status changes, the effective size of a Share is equal to the size of the Pool it belongs to.”, but after I replaced a failing disk with a bigger one, the share size remained the same (3TB instead of 4TB).
The resize share page doesn’t tell me the maximum size I can set. I’m not sure I could resize it anyway, since that page shows the same share size enforcement notice quoted above.
Thanks for your support.
I am glad it worked @Fred_Lemasson!
The Pool size shows 4TB, which is the size any Share can grow up to within that Pool. Currently there is no size enforcement at the Share level, as indicated by the message in the UI. I am not willing to put in workarounds for size enforcement and reporting; instead, my guess and hope is that things will be mostly fixed in BTRFS itself in 4.2.x or 4.3.
@suman have you got an upstream tracker bug for the quota feature?