I tried to set up a RAID configuration with BTRFS, but I get error messages that I do not understand. The Rockstor GUI also shows it as working after a reboot, yet it seems I cannot copy any data to it any more.
I am linking to another post; perhaps the two are somehow related.
What I did:
created a pool with a single disk
copied data over to free up another disk, which I then added to the pool, and changed it to RAID1
copied more data and changed to RAID5 with 2 disks
added another disk and changed to RAID5 again; 3 disks should work
wanted to copy more data but am now stuck with no disk space, because it seems no RAID5 was actually created
I am not really familiar with the btrfs command-line tools, but this is some information I found.
[root@Homeserver none]# dmesg | tail
[43399.549028] BTRFS info (device sde3): qgroup_rescan_init failed with -115
[43399.584448] BTRFS info (device sde3): qgroup_rescan_init failed with -115
[43399.621839] BTRFS info (device sde3): qgroup_rescan_init failed with -115
[43399.658563] BTRFS info (device sde3): qgroup_rescan_init failed with -115
[43399.700355] BTRFS info (device sde3): qgroup_rescan_init failed with -115
[43399.746557] BTRFS info (device sde3): qgroup_rescan_init failed with -115
[43399.819991] BTRFS info (device sde3): qgroup_rescan_init failed with -115
[43399.857595] BTRFS info (device sde3): qgroup_rescan_init failed with -115
[43399.899359] BTRFS info (device sde3): qgroup_rescan_init failed with -115
[43402.867054] BTRFS info (device sde3): qgroup scan completed (inconsistency flag cleared)
I do not know what went wrong or why it went wrong or how I can fix it.
But if this cannot be converted properly to RAID5 now, I am going to lose data. Thanks for helping.
Hi @herbert. This output gives some clues. I think your steps 2 and 3 choked BTRFS a bit and there's not enough room for it to balance properly. When you resize a pool, it triggers a btrfs balance job, which redistributes your data according to the requested raid profile and the disk set. We need to make sure this happened properly before proceeding. Currently, the Web-UI is not smart enough to convey the exact internal state. I've created this issue to address the problem.
Now, back to repairing your Pool. We need to do some balancing on the command line. Can you provide the output of these commands: btrfs fi show and blkid?
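For reference, this is roughly what happens behind the scenes when a Pool is resized; a minimal sketch on the command line, with /dev/sdX as a placeholder for the added disk and /mnt2/<pool_name> for the Pool's mount point:
# add the new disk to the pool
btrfs device add /dev/sdX /mnt2/<pool_name>
# rewrite existing data and metadata chunks using the requested raid profile
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt2/<pool_name>
# the balance can take a long time; check its progress with
btrfs balance status /mnt2/<pool_name>
If the balance fails or is interrupted, the Pool can end up with a mix of old and new profiles, which is why I want to verify it before we proceed.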
I read a little of the btrfs documentation and the basics are not that hard.
I tried the balancing on the CLI and it really gave me the information I needed: no disk space left for balancing. This information would be really nice to have in the Web-UI as well; it would have made my life easier. Also, the Web-UI does not reflect the real raid level, only the one chosen from the dropdown.
This should be changed.
I freed up some space and am doing the balancing right now, so I trust btrfs to fix the issue. I created a RAID0 pool and am balancing it now; after the migration I will add the last disk and change to RAID5 using the Web-UI.
Let's see if this is going to work.
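As far as I understand the docs, these commands show whether there is unallocated space left for a balance to work with (the path is from my setup):
# per-device allocation; a conversion balance needs unallocated space for new chunks
btrfs fi usage /mnt2/p1_r1
# per-profile breakdown of data and metadata
btrfs fi df /mnt2/p1_r1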
By the way, here are the outputs of your commands:
btrfs balance status /mnt2/p1_r1
Balance on ‘/mnt2/p1_r1’ is running
18 out of about 3241 chunks balanced (19 considered), 99% left
[root@Homeserver]# btrfs fi
usage: btrfs filesystem [<group>] <command> [<args>]
btrfs filesystem df [options] <path>
Show space usage information for a mount point
btrfs filesystem show [options] [<path>|<uuid>|<device>|label]
Show the structure of a filesystem
btrfs filesystem sync <path>
Force a sync on a filesystem
btrfs filesystem defragment [options] <file>|<dir> [<file>|<dir>...]
Defragment a file or a directory
btrfs filesystem resize [devid:][+/-]<newsize>[kKmMgGtTpPeE]|[devid:]max <path>
Resize a filesystem
btrfs filesystem label [<device>|<mount_point>] [<newlabel>]
Get or change the label of a filesystem
btrfs filesystem usage [options] <path> [<path>..]
Show detailed information about internal filesystem usage .
As I said, I figured it out on my own using the CLI, which should not be necessary: the Rockstor Web-UI should either show the complete balance error (as in my case), show a translated, more human-readable error message, or prevent the issue altogether.
I'll keep you updated. Can I provide anything else?
I am very glad to hear that you are finding it reasonable to work with the btrfs commands. It saves me support time, so I can spend more time on development. We are on the same page about Web-UI improvements; they are on the way and just a matter of time.
I appreciate that you are sharing the steps you are taking to fix this issue. It will help (1) other users that may encounter the same problem and (2) developers to fix stuff faster/better by providing valuable feedback.
I do need some more information from you for the same two points mentioned above.
Please share the output of this command: btrfs fi show (in your previous reply, the argument show is missing.)
Do you have two Pools now that are both being balanced? Is your end goal to have 2 RAID5 Pools?
My end goal is one RAID5 pool, but because I have too few disks I have to move the data from my old disks to the RAID0 pool first and then add the freed-up disk to the pool. When all data has been migrated to the RAID0 pool, I am going to convert it to RAID5.
At first I thought I could start with a degraded RAID5 pool to save some time, but as I realized, that did not work.
At the moment I am doing the balancing after deleting some data that was not needed any more, so the balance has enough space to work.
btrfs fi show
Label: ‘rockstor_rockstor’ uuid: 4c4a3b31-41e1-49fe-bb75-5655973c6db5
Total devices 1 FS bytes used 2.27GiB
devid 1 size 103.43GiB used 71.02GiB path /dev/sde3
Label: ‘p1_r1’ uuid: f7ed3e88-c3fc-404a-bc58-e1c862bbb9ec
Total devices 3 FS bytes used 2.70TiB
devid 1 size 2.73TiB used 2.73TiB path /dev/sdd
devid 2 size 2.73TiB used 596.01GiB path /dev/sdc
devid 3 size 465.76GiB used 324.01GiB path /dev/sda
Label: none uuid: 90b5fde6-e03c-4ddd-9f6d-3cff102cf47b
Total devices 1 FS bytes used 2.56TiB
devid 1 size 2.73TiB used 2.72TiB path /dev/sdb
btrfs-progs v4.1.2
The 'none' pool is one of my old disks that I am copying data from.
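For completeness, this is roughly the CLI equivalent of what I plan to do once all data is on the RAID0 pool (I will try the Web-UI first; /dev/sdX is just a placeholder for the freed-up disk):
# add the freed-up disk to the pool
btrfs device add /dev/sdX /mnt2/p1_r1
# convert data and metadata to raid5 across all member disks
btrfs balance start -dconvert=raid5 -mconvert=raid5 /mnt2/p1_r1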
@suman
After 1.5 days of balancing, the system has now balanced the disks to RAID5, as I wanted at the very beginning. Now I have to convert to RAID0 because of too little disk space in the RAID5 (I hadn't considered that at the beginning) and wanted to do that using the Web-UI, which failed.
Using the CLI I figured out why, and this has to be fixed in the Web-UI, otherwise converting from RAID5 to RAID0 or something similar won't work.
This is the log and the command that "forced" the balancing to RAID0. Force is needed because metadata integrity gets reduced, which btrfs does not accept without an extra switch.
[ 79.385005] BTRFS info (device sde3): qgroup scan completed (inconsistency flag cleared)
[ 185.773505] BTRFS error (device sda): balance will reduce metadata integrity, use force if you want this
[ 195.680467] BTRFS error (device sda): balance will reduce metadata integrity, use force if you want this
[root@Homeserver p1_r1]# btrfs balance start -f -dconvert=raid0 -mconvert=raid0 /mnt2/p1_r1
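If it helps anyone, the progress and the resulting profile can be checked with something like:
# shows how many chunks are left to process
btrfs balance status /mnt2/p1_r1
# once it finishes, data and metadata should show up as RAID0 here
btrfs fi df /mnt2/p1_r1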
Bummer. You might want to add a temporary USB drive to mitigate space issues. Thanks for putting btrfs and Pool resize to test and sharing all the details. Please continue to do so. It’s helpful as we go about enhancing the Web-UI.
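Roughly, the temporary drive idea would look like this on the CLI, with /dev/sdX standing in for whatever the USB drive shows up as:
# temporarily grow the pool so the balance has unallocated space to work with
btrfs device add /dev/sdX /mnt2/p1_r1
# ... run the conversion balance ...
# afterwards remove the temporary drive; btrfs relocates its chunks back onto
# the remaining disks (they must have enough free space for this to succeed)
btrfs device delete /dev/sdX /mnt2/p1_r1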
Hmm, using a USB disk; I should have thought about that earlier.
Adding another disk would result in another balancing process, which takes ages on my setup. I am still running the RAID5 to RAID0 balance.
Nevertheless, one other thing to consider: the Web-UI should be locked against changes as long as a balance is running in the background. Otherwise you add a new disk, get a failure in the logs, and the disk is never used because the process is not started again automatically, while the Web-UI tells you everything is fine if you do not dig deeper.
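Just as an illustration, a crude check the Web-UI (or a script) could do before allowing further changes, based on the output I see here:
# prints something like "Balance on '/mnt2/p1_r1' is running" while a balance
# is active, and "No balance found on '/mnt2/p1_r1'" when the pool is idle
btrfs balance status /mnt2/p1_r1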
We should really put together a guide for migrating data over to Rockstor. I’ve created a doc issue here, so we don’t lose track. Contributions are welcome!