RAID is not created

Hi,

I tried to set up a RAID configuration for BTRFS, but I get error messages that I do not understand. The Rockstor GUI also shows it as working after a reboot, yet it seems I cannot copy any data to it any more.
I have linked to another post; perhaps the two issues are related.

What I did:

  1. created a pool with a single disk
  2. copied data over to free up another disk, which I then added to the pool, and changed the profile to RAID1
  3. copied more data and changed to RAID5 with 2 disks
  4. added another disk and changed to RAID5 again; with 3 disks this should work
  5. wanted to copy more data, but am now stuck with no disk space, because it seems no RAID5 was created

I am not really familiar with the btrfs command-line tools, but here is some information I found.

[root@Homeserver none]# dmesg | tail
[43399.549028] BTRFS info (device sde3): qgroup_rescan_init failed with -115
[43399.584448] BTRFS info (device sde3): qgroup_rescan_init failed with -115
[43399.621839] BTRFS info (device sde3): qgroup_rescan_init failed with -115
[43399.658563] BTRFS info (device sde3): qgroup_rescan_init failed with -115
[43399.700355] BTRFS info (device sde3): qgroup_rescan_init failed with -115
[43399.746557] BTRFS info (device sde3): qgroup_rescan_init failed with -115
[43399.819991] BTRFS info (device sde3): qgroup_rescan_init failed with -115
[43399.857595] BTRFS info (device sde3): qgroup_rescan_init failed with -115
[43399.899359] BTRFS info (device sde3): qgroup_rescan_init failed with -115
[43402.867054] BTRFS info (device sde3): qgroup scan completed (inconsistency flag cleared)

[root@Homeserver none]# btrfs filesystem df /mnt2/p1_r1
Data, single: total=2.70TiB, used=2.70TiB
Data, RAID5: total=761.27GiB, used=761.25GiB
System, single: total=4.00MiB, used=400.00KiB
Metadata, single: total=4.00GiB, used=2.82GiB
Metadata, RAID5: total=2.00GiB, used=1.54GiB
GlobalReserve, single: total=512.00MiB, used=43.70MiB

I do not know what went wrong, why it went wrong, or how I can fix it.
But if this cannot be converted properly to RAID5 now, I am going to lose data. Thanks for helping!

Hi @herbert. This output gives some clues. I think your steps 2 and 3 choked BTRFS a bit and there's not enough room for it to balance properly. When you resize a pool, it triggers a btrfs balance job, which redistributes your data according to the newly requested RAID profile and the disk set. We need to make sure this happened properly before proceeding. Currently, the Web-UI is not smart enough to convey the exact internal state. I've created this issue to address the problem.
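
By the way, the repeating qgroup_rescan_init failed with -115 lines in your dmesg look harmless to me: -115 is EINPROGRESS, i.e. a quota rescan was already running, and the last line shows it eventually completed. If you ever want to check on a rescan yourself, something like this should work (assuming the mount point from your df output):

btrfs quota rescan -s /mnt2/p1_r1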

Now, back to repairing your Pool. We need to do some balancing on the command line. Can you provide the output of these commands: btrfs fi show and blkid?
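
Depending on what those show, the repair will most likely be a manual convert-balance, along these lines (just a sketch for now, please don't run it yet; it assumes the pool stays mounted at /mnt2/p1_r1 and that RAID5 across all three disks is still the goal):

btrfs balance start -dconvert=raid5 -mconvert=raid5 /mnt2/p1_r1
btrfs balance status /mnt2/p1_r1

The first command rewrites data and metadata chunks into the RAID5 profile; the second reports progress while it runs.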

Hi @suman

I read a little btrfs documentation and the basics are not that hard :wink:
I tried to do the balancing on the CLI, and it really gave me the information I needed: no disk space left for balancing. This information would be really nice to have in the Web-UI as well; it would have made my life easier. Also, the Web-UI does not reflect the actual RAID level, only the one chosen from the dropdown.
This should be changed.
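
For anyone else running into this: if I read the docs correctly, the allocation picture that explains the "no space" failure can be seen with

btrfs filesystem usage /mnt2/p1_r1

which shows how much unallocated space is left on each device for the balance to work with.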

I freed up some space and am doing the balancing right now, so I trust btrfs to fix the issue. I created a RAID0 pool and am balancing it now; after the migration I will add the last disk and change to RAID5 via the Web-UI.
Let's see if this is going to work.

By the way, here are the outputs of your commands :wink:

btrfs balance status /mnt2/p1_r1
Balance on '/mnt2/p1_r1' is running
18 out of about 3241 chunks balanced (19 considered), 99% left
[root@Homeserver]# btrfs fi
usage: btrfs filesystem [<group>] <command> [<args>]

btrfs filesystem df [options] <path>
    Show space usage information for a mount point
btrfs filesystem show [options] [<path>|<uuid>|<device>|label]
    Show the structure of a filesystem
btrfs filesystem sync <path>
    Force a sync on a filesystem
btrfs filesystem defragment [options] <file>|<dir> [<file>|<dir>...]
    Defragment a file or a directory
btrfs filesystem resize [devid:][+/-]<newsize>[kKmMgGtTpPeE]|[devid:]max <path>
    Resize a filesystem
btrfs filesystem label [<device>|<mount_point>] [<newlabel>]
    Get or change the label of a filesystem
btrfs filesystem usage [options] <path> [<path>..]
    Show detailed information about internal filesystem usage .

overall filesystem tasks and information
[root@Homeserver]# blkid
/dev/sda: LABEL="p1_r1" UUID="f7ed3e88-c3fc-404a-bc58-e1c862bbb9ec" UUID_SUB="ddbcd688-e3ee-42be-a9ae-dffc0308ab25" TYPE="btrfs"
/dev/sdb: UUID="90b5fde6-e03c-4ddd-9f6d-3cff102cf47b" UUID_SUB="7c6c2e68-4b51-4c95-a861-0d03238b2872" TYPE="btrfs"
/dev/sdc: LABEL="p1_r1" UUID="f7ed3e88-c3fc-404a-bc58-e1c862bbb9ec" UUID_SUB="9dbb5892-de48-4cf0-b4d9-ff95b346432f" TYPE="btrfs"
/dev/sdd: LABEL="p1_r1" UUID="f7ed3e88-c3fc-404a-bc58-e1c862bbb9ec" UUID_SUB="5aa24837-722c-4f4b-bd11-1c6e9483a9ac" TYPE="btrfs"
/dev/block/8:67: LABEL="rockstor_rockstor" UUID="4c4a3b31-41e1-49fe-bb75-5655973c6db5" UUID_SUB="f1c1a5bd-3a19-4ba4-945d-6c6efee6f76a" TYPE="btrfs"
/dev/block/8:65: UUID="4e522f51-4161-4d13-9812-f16e88d0c9f3" TYPE="ext4"
/dev/block/8:66: UUID="476bc9c6-e2e2-41a0-b933-4a373b4143fd" TYPE="swap"
[root@Homeserver]#

As I said, I figured it out on my own using the CLI, which should not be necessary: the Rockstor Web-UI should show the complete balance error (or a translated, more human-readable message), or prevent the issue in the first place.

I'll keep you updated. Or can I provide anything else?

I am very glad to hear that you find it reasonable to work with the btrfs commands. It saves me support time, so I can spend more on development. We are on the same page about the Web-UI improvements; they are on the way and just a matter of time.

I appreciate that you are sharing the steps you are taking to fix this issue. It will help (1) other users who may encounter the same problem and (2) developers fix things faster/better by providing valuable feedback.

I do need some more information from you for the same two points mentioned above.

  1. Please share the output of this command: btrfs fi show (in your previous reply, the argument show was missing.)
  2. Do you have two Pools now that are both being balanced? Is your end goal to have 2 RAID5 Pools?

Hi,

My end goal is one RAID5 pool, but because I have too few disks I first have to move the data from my old disks to a RAID0 pool, and then add the freed disks to that pool afterwards. Once all the data is migrated to the RAID0 pool, I am going to convert it to RAID5.
At first I thought I could start with a degraded RAID5 pool to save some time, but as I realized, that does not work.
At the moment I am running the balance after deleting some data that was no longer needed, so the balance has enough space to work.
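
If I understand the tools correctly, the remaining plan in CLI terms is roughly this (a sketch; /dev/sdb is the old "none" disk from the blkid output above, and I would only run this once all its data has been copied off):

btrfs device add -f /dev/sdb /mnt2/p1_r1
btrfs balance start -dconvert=raid5 -mconvert=raid5 /mnt2/p1_r1

The -f should be needed because the disk still carries the old filesystem signature.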

btrfs fi show
Label: 'rockstor_rockstor' uuid: 4c4a3b31-41e1-49fe-bb75-5655973c6db5
Total devices 1 FS bytes used 2.27GiB
devid 1 size 103.43GiB used 71.02GiB path /dev/sde3

Label: 'p1_r1' uuid: f7ed3e88-c3fc-404a-bc58-e1c862bbb9ec
Total devices 3 FS bytes used 2.70TiB
devid 1 size 2.73TiB used 2.73TiB path /dev/sdd
devid 2 size 2.73TiB used 596.01GiB path /dev/sdc
devid 3 size 465.76GiB used 324.01GiB path /dev/sda

Label: none uuid: 90b5fde6-e03c-4ddd-9f6d-3cff102cf47b
Total devices 1 FS bytes used 2.56TiB
devid 1 size 2.73TiB used 2.72TiB path /dev/sdb

btrfs-progs v4.1.2

The 'none' pool is one of my old disks that I am copying data from.


@suman
After 1.5 days of balancing, the system has now balanced the disks to RAID5, as I wanted at the very beginning. Now I have to convert to RAID0 because of too little disk space in the RAID5 (I hadn't considered that at the beginning), and I wanted to do that using the Web-UI, which failed.
Using the CLI I figured out why, and this has to be fixed in the Web-UI, otherwise RAID5-to-RAID0 conversions (or similar) won't work.

This is the log and the command that "forced" the balancing to RAID0. Force is needed because the conversion reduces metadata integrity, which btrfs does not accept without an extra switch.

[ 79.385005] BTRFS info (device sde3): qgroup scan completed (inconsistency flag cleared)
[ 185.773505] BTRFS error (device sda): balance will reduce metadata integrity, use force if you want this
[ 195.680467] BTRFS error (device sda): balance will reduce metadata integrity, use force if you want this
[root@Homeserver p1_r1]# btrfs balance start -f -dconvert=raid0 -mconvert=raid0 /mnt2/p1_r1

Bummer. You might want to add a temporary USB drive to mitigate the space issue. Thanks for putting btrfs and Pool resizing to the test and sharing all the details. Please continue to do so; it's helpful as we go about enhancing the Web-UI.
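
For the record, the temporary-disk trick looks roughly like this (a sketch; /dev/sdX stands for whatever name the USB drive gets):

btrfs device add /dev/sdX /mnt2/p1_r1
(run the balance)
btrfs device delete /dev/sdX /mnt2/p1_r1

The delete at the end migrates all chunks off the USB drive before releasing it, so don't just unplug it.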

Hmm, using a USB disk... I should have thought about that earlier :frowning:
Adding another disk would trigger yet another balancing process, and that takes ages on my setup. I am still running the RAID5-to-RAID0 balance.

Nevertheless, one other thing to consider: the Web-UI should be locked against changes as long as a balance is running in the background. Otherwise you add a new disk, get a failure in the logs, and the disk is never used because the process is not restarted automatically, yet the Web-UI tells you everything is fine unless you dig deeper.
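
Until then, one can at least check and control a running balance by hand, e.g.:

btrfs balance status /mnt2/p1_r1
btrfs balance pause /mnt2/p1_r1
btrfs balance resume /mnt2/p1_r1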

We should really put together a guide for migrating data over to Rockstor. I’ve created a doc issue here, so we don’t lose track. Contributions are welcome!
