Pool resizing issues

I set up Rockstor with a two-disk mirror. I just went to “resize” (maybe it should be labelled “modify”?) the pool to add two more disks.

I first tried to do so without changing the RAID config, but got an error about this being unsupported.

So I went back to say “yes” to reconfiguring the RAID level. But it gave me no option on the reconfig, and proceeded without any direction.

Checking the balance log shows this error:

Error running a command. cmd = ['btrfs', 'balance', 'start', u'-mconvert=', u'-dconvert=', u'/mnt2/documents']. rc = 1. stdout = ['']. stderr = ['the convert option requires an argument', '']
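The root cause is visible in that command line: the `-mconvert=` and `-dconvert=` filters were passed with no RAID profile, and `btrfs balance start` rejects an empty convert argument. A minimal sketch of how such an argv could be built so the filters are only emitted when a target profile was actually chosen (the helper name is hypothetical, not Rockstor’s actual code):

```python
def balance_args(mnt_pt, profile=None):
    """Build a btrfs balance argv; only add the convert filters
    when a target RAID profile was actually selected."""
    cmd = ["btrfs", "balance", "start"]
    if profile:  # e.g. "single", "raid1", "raid10"
        cmd += ["-mconvert=%s" % profile, "-dconvert=%s" % profile]
    cmd.append(mnt_pt)
    return cmd

# With no profile selected, no empty convert flags are emitted:
print(balance_args("/mnt2/documents"))
# With a profile, both filters carry the argument btrfs requires:
print(balance_args("/mnt2/documents", "raid10"))
```

With an empty selection this yields a plain `btrfs balance start /mnt2/documents` instead of the malformed `-mconvert=`/`-dconvert=` seen in the error above.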

When I got back to the “pools” view, the “RAID” column is blank.

Here’s a similar result if I go to the “remove drives” wizard.

Seems data is all gone as well.

Correction: I can see the data is there from the command line.

Just tried it myself; for me everything seems to work. The wizard looked like this:

Also, it might be possible that the GUI somewhat mixed things up during your attempts to add a disk (going back in the wizard and selecting another option leads to the same selection as the previous one; seems like a bug).

That being said, I’m able to do this, which should not be possible:

The command internally issued by such a configuration in the web UI might be the one you saw the error message with.

What is the output of btrfs fi show?

I do think part of this is the GUI just getting confused. But in answer to your question:

Label: 'documents'  uuid: eedeccac-104b-434b-acf3-01249fec3bb2
    Total devices 4 FS bytes used 205.35GiB
    devid    1 size 931.51GiB used 0.00B path /dev/sda
    devid    2 size 931.51GiB used 0.00B path /dev/sde
    devid    3 size 2.73TiB used 207.03GiB path /dev/sdb
    devid    4 size 2.73TiB used 207.03GiB path /dev/sdc

What is your RAID type? It seems you used raid1 initially.

The disks sda and sde have been added?

You should start a balance if you use anything different from single, as currently your data is on sdb and sdc only.
If the GUI is still mixed up, try starting a new resize, select “change raid level”, and set it to the desired setting, but don’t add any drives.

I did run a balance, which finished, and resulted in all data being moved from the initial two drives, to the newly-added ones.

Obviously something went wrong that shouldn’t have.

But I changed to RAID10, rebalanced, and things look as expected now. Thanks for the help!

Label: 'documents'  uuid: eedeccac-104b-434b-acf3-01249fec3bb2
    Total devices 4 FS bytes used 205.38GiB
    devid    1 size 931.51GiB used 206.53GiB path /dev/sda
    devid    2 size 931.51GiB used 206.53GiB path /dev/sde
    devid    3 size 2.73TiB used 206.53GiB path /dev/sdb
    devid    4 size 2.73TiB used 206.53GiB path /dev/sdc

nice!

@suman you might want to check that web UI bug regarding the selection when clicking “resize” on the pool page.

@felixbrucker Thanks for helping out, way to go!

There is some issue if the user goes back and forth on the wizard screens before hitting the resize button. Is that what may have happened @bdarcus? Are you all set now?

There’s an open pull request pending testing on this by @mchakravartula.

[quote=“suman, post:8, topic:454”]
There is some issue if the user goes back and forth on the wizard screens before hitting the resize button. Is that what may have happened @bdarcus?[/quote]

Am not sure. I don’t recall doing so, but it’s possible.

Yes, thanks.

Actually, going back to my first post to reproduce: I want to keep my same RAID1 config, but add two drives. I go to do that, but the system won’t let me, etc.

Here’s another bug. I want to remove two drives from the four-drive RAID10. I specify those two drives, so that the end result will be two drives in the pool. Hence, RAID10 will not be possible. But Rockstor assumes RAID10 will be the result.

PS - notwithstanding this, what’s the best way to just replace two drives?

That’s more of a feature than a bug at this point :smile: Here’s why.

If you click resize with that config, you will get the error you expect about the 4-disk minimum for raid10. It’s just that the validation is handled on the server side, and you need to click resize to trigger it.

At this point I don’t want to duplicate validation on both the frontend and the backend. But we will in the future, as btrfs behavior settles down and our understanding gets better.
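For illustration, the server-side check boils down to comparing the resulting device count against the minimum btrfs requires per profile. A sketch under that assumption (the function and dict names are made up, not Rockstor’s actual code):

```python
# Minimum number of devices btrfs requires per RAID profile
MIN_DEVICES = {"single": 1, "raid0": 2, "raid1": 2, "raid10": 4, "raid5": 2, "raid6": 3}

def validate_resize(raid_level, devices_after):
    """Reject a resize that would leave the pool with fewer
    devices than the target RAID profile supports."""
    needed = MIN_DEVICES[raid_level]
    if devices_after < needed:
        raise ValueError("%s requires at least %d devices; pool would have %d"
                         % (raid_level, needed, devices_after))

# Removing 2 of 4 drives from a raid10 pool fails this check:
# validate_resize("raid10", 2) raises ValueError
```

This is why the frontend happily lets you select two drives for removal: the error only surfaces once the resize request hits the server.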

To replace two drives you need to add 2, and then remove the other two after the first resize finishes. So two resize operations in total. Perhaps we should support add and remove together to save on balance times. But again, this is one of those operations with many variables and uncertainties that needs to be tested and understood better.
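The two-step replacement above corresponds to three underlying btrfs operations. A sketch of the command sequence as a plan builder (device names are hypothetical, and this is not Rockstor’s actual code):

```python
def replace_plan(mnt_pt, add_devs, remove_devs):
    """Return the btrfs command sequence for a two-step drive
    replacement: add the new devices and rebalance first, then
    delete the old ones (device delete migrates data off them)."""
    return [
        ["btrfs", "device", "add"] + add_devs + [mnt_pt],
        ["btrfs", "balance", "start", mnt_pt],                   # first resize
        ["btrfs", "device", "delete"] + remove_devs + [mnt_pt],  # second resize
    ]

for cmd in replace_plan("/mnt2/documents", ["/dev/sdX", "/dev/sdY"],
                        ["/dev/sdA", "/dev/sdB"]):
    print(" ".join(cmd))
```

Supporting add and remove together would collapse the balance and the delete-triggered migration into one pass, which is the balance-time saving mentioned above.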

OK, that’s reasonable.

But the way the UI is set up leaves this user uncertain about that. So you may want to note that in the “The following changes …” text? Maybe: “The following changes will be applied to the pool documents. Note that (currently) [insert whatever description].”

1 Like

I don’t have enough SATA ports on this (ECS) motherboard to add 2.

Maybe I could put one on a USB port?

Oh yes, that should work.

So I’m trying your suggestion on the replacement, @suman, but running into bugs.

I added the two drives successfully, and balance finished correctly.

But when deleting the other two drives just now, after I confirmed what I wanted, I got this:

Note how the “resize” item is grayed out. I got some error that I couldn’t see before it went away.

Now the odd thing: it seems the action is being processed, as btrfs fi show suggests there’s rebalancing happening.

The button is disabled/grayed out during the time it takes the server to respond. It usually happens fast enough that it’s not noticeable, but in this case it looks like the server not only took time, but also threw an error. A screenshot of the error would have been helpful, but the logs should have useful info.

What does the output of btrfs fi show look like?

Could you also share a screenshot of the Storage -> Disks screen? Also, if you think the btrfs balance finished successfully but the UI still attributes sde and sda to the pool, could you click the rescan button on the Disks screen to see if that updates the UI?

Thank you for patiently providing detailed information. I am finishing up on a different issue, but I’ll thoroughly test all resize code paths right after that.

Everything is now fine, in that the output of btrfs fi show and the disks UI are consistent and as expected.

I just needed to wait for the rebalance to finish, rescan, and delete two “duplicate” drives. Now I see:

Label: 'documents'  uuid: eedeccac-104b-434b-acf3-01249fec3bb2
    Total devices 4 FS bytes used 211.36GiB
    devid    3 size 2.73TiB used 106.53GiB path /dev/sdb
    devid    4 size 2.73TiB used 106.53GiB path /dev/sdc
    devid    5 size 2.73TiB used 106.53GiB path /dev/sde
    devid    6 size 2.73TiB used 106.53GiB path /dev/sda

But it took a while to get there.

I sent in the logs via email.

1 Like