VirtualBox install: extra disks not seen in GUI & best practice for RAID1

I’ve been procrastinating about setting up a home NAS for a long time. I gave up on the idea of FreeNAS/ZFS as I couldn’t justify the outlay and didn’t want to use BSD. Then I stumbled upon Rockstor and it seems like the best option for me.
I’m testing an install in VirtualBox, but things are not going smoothly.

I’ve created an 8 GB primary VDI disk and 4 extra VDIs: 2x 100 MB and 2x 200 MB, all four fixed-size.
Installation went fine, but I cannot see the 4 extra disks in the GUI.
Logging in on the command line, parted told me the disk label was invalid, so I set all 4 to GPT.
The OS sees the drives no problem. There are no partitions on any of them.
The GUI however just doesn’t see them at all.

Any ideas where I might be going wrong?

The end game is to install on hardware with 6 drives in a RAID1 config (2x 2TB, 2x 3TB, 2x 1.5TB) as one big pool.
Is this best practice, or would I be better off with 3 separate pools, one per drive size?
RAID1 is for availability only. Backups and replication will be done to a second box off site, plus local USB (or something!) snapshots.

I’ll be running MQTT, Emby and Tvheadend in Docker on top of it all, with some DVB-T cards installed.
E5800, 16 GB RAM, 9x SATA & master/slave PATA.
One machine to rule them all.

To partially answer some of my own questions: it seems 100 MB/200 MB disks are too small, and Rockstor doesn’t like them.
After creating 4x 2.11 GB VDI images, the GUI sees them. No difference on the command line.
I see the virtualisation guide hints at this, as it says to create 2 GB disks, but it doesn’t say that’s the minimum size. What is the minimum size, please?

The disk config screen suggests only 2 GB of usable space with all of these drives.
If I deselect both of the 1 GB disks I end up with more usable space, at 2.1 GB.
Is this a known bug? Or part of the issue of multi-disk RAID1?

The btrfs calculator tool suggests my config would yield 3.1 GB in total.

@rocklobster Welcome to the Rockstor community.

Yes, I read your initial post, but the size issue didn’t spring to mind.
Currently in the code, the scan_disks(min_size) function is called with settings.MIN_DISK_SIZE, which is defined as:

```python
# Minimum disk size allowed is 1GB. Anything less is not really usable. Reduce
# this to 100MB if you really need to, but any less would just break things.
MIN_DISK_SIZE = 1024 * 1024
```

That was the value when the Rockstor package was built.
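As an illustration only, here is a minimal sketch of how such a size cutoff behaves. This is hypothetical code: the names mirror the quoted settings, but it is not Rockstor’s actual scan_disks implementation, and the KiB unit is an assumption.

```python
# Hypothetical sketch of a minimum-size disk filter, NOT Rockstor's real code.
# Assumption: sizes are tracked in KiB, so 1024 * 1024 KiB == 1 GiB.
MIN_DISK_SIZE = 1024 * 1024  # 1 GiB expressed in KiB

def scan_disks(disks, min_size=MIN_DISK_SIZE):
    """Keep only disks of at least min_size KiB; smaller devices are
    silently skipped, so they would never reach the web GUI."""
    return [d for d in disks if d["size_kib"] >= min_size]

disks = [
    {"name": "sdb", "size_kib": 100 * 1024},       # 100 MiB test VDI
    {"name": "sdc", "size_kib": 2 * 1024 * 1024},  # 2 GiB test VDI
]
# With the default cutoff, only the 2 GiB disk survives the filter.
```

Under that model, the 100 MB device is simply dropped before the GUI ever sees it, which matches the behaviour reported above.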

But also note the following from the btrfs FAQ:


block group (chunk): a logical range of space of a given profile; stores data, metadata, or both; sometimes the terms are used interchangeably.
A typical size of metadata block group is 256MiB (filesystem smaller than 50GiB) and 1GiB (larger than 50GiB), for data it’s 1GiB. The system block group size is a few megabytes.

So it may well be advisable not to go below 1 GB, and to stick to 2 GB and up, as these very small drives can display strange behaviour and rapidly run out of space given btrfs’s background space requirements. I’d say a practical minimum is more like 3-4 GB, which is still a fraction of a modern USB key.
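As a rough back-of-the-envelope illustration, using the FAQ’s typical figures and assuming a single metadata block group plus a small system block group (the exact system size is an assumption):

```python
# Rough arithmetic: how much of a tiny device is left for data chunks.
MiB = 1024 * 1024
device = 1024 * MiB       # a 1 GiB test disk
metadata_bg = 256 * MiB   # typical metadata block group (filesystem < 50 GiB)
system_bg = 4 * MiB       # system block group: "a few megabytes" (assumed 4)
data_left = device - metadata_bg - system_bg
print(data_left // MiB)   # 764, i.e. only ~764 MiB of a "1 GiB" disk for data
```

A quarter of the device is gone before any data is written, which is why these tiny test disks fill up much faster than their nominal size suggests.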

Hope that helps, and thanks for bringing this up.

As a result of your post I have opened the following issue on the rockstor/rockstor-doc GitHub repo so that it doesn’t get forgotten.


Hi Phil,

Good info thanks.
I’d definitely say it’s not a big problem, just something to note for people messing around who should indeed know better :slight_smile:
A 1 GB minimum is fine for testing; after all, my real drives are 1000 times that size.

Oh, and it does look like the calculator on the pool-setup page had a bug in it. After ignoring the 2 GB result and creating the pool, the next page correctly displayed 3.1 GB total, matching the external calculator.
I need to get on GitHub and start looking at contributing.


My understanding is that btrfs raid1 requires 2 or more devices and needs to ensure that each chunk is duplicated on 2 different devices. So if all 4 drives are filled evenly at first, then once the 2x 1 GB drives are full there is still 1 GB left on each of the 2 remaining 2 GB drives, and 2 copies can still be stored in that remaining space. So I would say the selector’s calculation could do with improving.
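That chunk-pairing logic can be sketched as a greedy allocation, each data chunk going to the two devices with the most free space left. This is a simplified model for illustration, not btrfs’s or Rockstor’s actual allocator (which also reserves space for metadata):

```python
def raid1_usable(device_sizes, chunk=1):
    """Greedy model of btrfs raid1 data allocation: each chunk is
    mirrored onto the two devices with the most free space.
    Sizes and chunk share whatever unit you pass in."""
    free = list(device_sizes)
    usable = 0
    while True:
        free.sort()
        # raid1 needs room for the chunk on two different devices.
        if len(free) < 2 or free[-2] < chunk:
            return usable
        free[-1] -= chunk
        free[-2] -= chunk
        usable += chunk

# 2x 1 GiB + 2x 2 GiB yields 3 GiB of mirrored data space, as reasoned above.
```

Run against the thread’s mix in MiB with 256 MiB chunks (2x 1024 + 2x 2160), the same model lands at 3072 MiB, about the 3.1 GB the external calculator reported.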

As you have now posted, the resulting pool is reported correctly.

If you are game, do please create a GitHub issue for this, as it’s another ‘nice to get sorted’ type thing that just takes the time to do. From the official docs we have Contributing to Rockstor - Overview, which may be of interest, and the main code repository is rockstor/rockstor-core. I would warn against trusting tests with such small volumes, though: there is very little space for btrfs to manoeuvre, it really is a rock-bottom size, and not akin to your 1000-fold actual space. Btrfs would literally only have 4 chunks per 1 GB device to play with, which is going to trip something up sooner or later. You may well get out-of-space errors before you expect them.

But I get what you mean.

Anyway have fun and do let the forum know how it goes.