Available disk space in raid0 not shown correctly?

Hello there,
I am migrating my data over to rockstor and I have run into a weird issue that I can’t figure out.

To start with, I have 2 disks: sda (2 TB) and sdb (500 GB). For now they are running in raid0; I am planning to change this to raid5 with 4 disks at a later time. raid0 should be striping, so I'd expect roughly 2.5 TB of space. I have migrated about one TB and btrfs tells me the disk is full. What is the issue here, and what can I do to resolve it?

Here is the output of a few interesting commands:

$btrfs fi show /dev/sda
Label: 'data'  uuid: 51fe3ce9-f139-437f-9312-a401988b3c27
    Total devices 2 FS bytes used 930.70GiB
    devid    1 size 1.82TiB used 465.76GiB path /dev/sda
    devid    2 size 465.76GiB used 465.76GiB path /dev/sdb

$btrfs filesystem df /mnt2/data/
Data, RAID0: total=929.51GiB, used=929.50GiB
System, RAID0: total=16.00MiB, used=80.00KiB
Metadata, RAID0: total=2.00GiB, used=1.19GiB

So it looks like btrfs was formatted/initialized with twice the size of sdb. That's not what I'd expect for raid0, and it's not what Rockstor reports in the UI. (In fact, it tells me that I have used 0% of 2.27 TB, which is also wrong.)
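The numbers above can be reproduced with a quick back-of-the-envelope calculation (a sketch, with sizes rounded down to whole GiB): btrfs raid0 stripes every chunk across all devices equally, so with two devices the usable capacity is limited to twice the smaller disk, while single mode could use the sum of both.

```shell
# Device sizes from the "btrfs fi show" output above, rounded to GiB.
SDA=1863   # /dev/sda, 1.82 TiB
SDB=465    # /dev/sdb, 465.76 GiB

MIN=$SDB                 # the smaller device
RAID0=$((2 * MIN))       # raid0 stripes each chunk across both devices
SINGLE=$((SDA + SDB))    # single mode can fill each device independently

echo "raid0 usable:  ${RAID0} GiB"
echo "single usable: ${SINGLE} GiB"
```

The raid0 figure (~930 GiB) matches the point at which the pool reported itself full, and the single figure matches the ~2.27 TB the UI shows.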

I tried resizing btrfs on the terminal using

$btrfs filesystem resize max /mnt2/data/

to no avail. Where did I/rockstor/btrfs go wrong?
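As a side note on the resize attempt: on a multi-device filesystem, resize acts on one device at a time (selected with a devid prefix), and "max" only grows that device to fill its underlying disk. Both devices here already span their disks, so resizing cannot help; the limit is raid0 chunk allocation, not device size. A sketch of the per-device form (assuming the pool is mounted at /mnt2/data as above; the mountpoint check is just a guard so nothing runs against a missing pool):

```shell
POOL=/mnt2/data   # mountpoint from the post above; adjust for your system

if mountpoint -q "$POOL" 2>/dev/null; then
    # Grow each device to the full size of its disk (a no-op here,
    # since both devices already span their disks).
    btrfs filesystem resize 1:max "$POOL"   # devid 1 = /dev/sda
    btrfs filesystem resize 2:max "$POOL"   # devid 2 = /dev/sdb
else
    echo "no pool mounted at $POOL; nothing to do"
fi
```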

Interesting. This page claims that btrfs raid0 should only be used on disks of the same size. That's the first time I've heard of it. Anyway, I am now in the process of converting my disks to “single” via the command line. How come Rockstor does not offer it? Also, I was wondering about “dup”, which sounds much safer in this context. (Not that I would need it right now, since I am still in the transition phase…)

Will I break anything when I convert my raid level bypassing Rockstor?
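For reference, the conversion itself is done with an online balance using convert filters. A minimal sketch (assuming the pool is mounted at /mnt2/data as above, and choosing raid1 metadata, which is the usual multi-device default; the mountpoint check is just a guard so the commands don't run against a missing pool):

```shell
POOL=/mnt2/data   # mountpoint from the posts above; adjust for your system

if mountpoint -q "$POOL" 2>/dev/null; then
    # Convert data chunks to single (uses the full capacity of
    # unequal disks) and metadata to raid1 (one copy per device).
    btrfs balance start -dconvert=single -mconvert=raid1 "$POOL"
    # Verify the new profiles.
    btrfs filesystem df "$POOL"
else
    echo "no pool mounted at $POOL; skipping conversion"
fi
```

The balance rewrites every chunk, so it can take a long time on a terabyte of data, but the filesystem stays mounted and usable throughout.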

raid0 stripes your data; once one of your disks is full, this can't be done any more.
Changing raid levels should be possible in the web UI. dup is, to my understanding, basically -m raid1 on a single device; raid1 metadata should be enabled by default if not specified otherwise (for you it's also raid0).
Changing this on the command line is possible, but I'm unsure whether the web UI reports the correct raid levels afterwards.

After browsing the internet for a while, I am quite sure I found the biggest culprit for my problems: my expectations. Since you hear everywhere that btrfs is so cool, lets you have raid configurations with disks of different sizes, and is all clever about it, I somehow expected this to be true for raid0 as well. It's not. It's block-PAIR striping, so once the smaller of my two disks is full, it's over.

The questions remaining are:

  1. Why didn't Rockstor's UI tell me my true available size? This looks like a bug to me.
  2. Why doesn't Rockstor allow the “single” mode, so you can use your full disk size in cases like mine?

PS: I changed the title to a more meaningful one…

I also saw earlier today that it is indeed possible to create a pool in single mode. But the option doesn't show up in the resize menu once the pool has been set to something else. After changing it to single mode on the command line, Rockstor claimed it was raid5. It didn't complain about converting to raid1 from the UI afterwards though, so that should be fine. (I added a third disk; otherwise that obviously wouldn't make any sense.) Rebalancing right now.