Rockstor Not Recognizing Expanded Storage Size

I run Rockstor 5.0.8 (Leap 15.5) in a Proxmox VM. I needed more space, so in Proxmox I grew the disk that backs /dev/sdc from 10TB to 20TB. The new disk size is reflected at the kernel level in openSUSE:

rockstor:~ # dmesg | grep blocks
[    3.336762] sd 0:0:0:0: [sda] 134217728 512-byte logical blocks: (68.7 GB/64.0 GiB)
[    3.339867] sd 2:0:0:3: [sdc] 42949672960 512-byte logical blocks: (22.0 TB/20.0 TiB)
[    3.340195] sd 1:0:0:1: [sdb] 67108864 512-byte logical blocks: (34.4 GB/32.0 GiB)
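As an aside, the kernel picked the new size up by itself here; if it hadn't, a rescan of the SCSI device can be prompted from inside the VM with something like the following (assuming sdc is the grown disk):

echo 1 > /sys/class/block/sdc/device/rescan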

However, it’s not reflected in the Rockstor UI, no matter how often I click the Rescan button, so I can’t expand the btrfs pool to take advantage of the new size.

How do I resolve this?

I swear I thought I had done this successfully before in Rockstor 4.6.3 on Tumbleweed, but maybe I’m wrong?

When you grew the disk with Proxmox, did you also increase the partition to fill the resized disk, and then use

btrfs filesystem resize <devid>:max <mnt>

to grow the filesystem?

(btrfs-filesystem(8) — BTRFS documentation)

I think at the least you need to have a partition sized up to cover the disk …
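For illustration only (the pool name and partition are hypothetical), that combination would look something like:

parted /dev/sdc resizepart 1 100%
btrfs filesystem resize 1:max /mnt2/pool-name

with the 1: selecting the devid of the grown member. But that of course only applies if the pool member is a partition in the first place.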


I’m confused. Is this not a list of raw devices in the system?

And if a disk is bigger than its pool, should there not be an “expand pool” option in the pools UI?

I created these pools directly on the raw devices, using the UI for everything. My understanding was that no partitions are created when doing it this way. There’s no partition table on this disk; it’s just a “whole disk” btrfs volume.

You weren’t wrong; that is the correct command to accomplish that. In my case it was:
btrfs filesystem resize max /mnt2/rockpool
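For reference, the result can be confirmed with:

btrfs filesystem usage /mnt2/rockpool

which should now report the grown device size.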

But I’m still not understanding why the UI didn’t see the larger disk, or why it wouldn’t let me expand the volume from there. It seems like the Disks page is only displaying btrfs volumes on the default devid rather than the actual disks, which I know can’t be right, because I’ve had partitions listed there before that had nothing to do with btrfs.

And I swear I’ve done exactly this before without issues on 4.6.3

@GoreMaker yes, this is curious on the raw device piece … I don’t have an answer, but I was able to replicate your observation (albeit using VirtualBox and not Proxmox) on 4.5.8-0 on Leap 15.5 (no kernel backport installed).

I grew a drive from 6GB to 8GB using VBox’s media manager. The Disks page should (as you mentioned) show the increased size but, like in your case, it doesn’t for me either:


It should show 8GB, but doesn’t.
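For anyone wanting to replicate this, the same grow can also be done with VBox’s CLI; something like the following (the disk path here is illustrative):

VBoxManage modifymedium disk "rockstor/disk1.vdi" --resize 8192

with the new size given in MB.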

The underlying code to scan the disks can be found here, using lsblk:

When executing that command outside of the Rockstor UI, i.e. on the command line:

lsblk -p -p -o NAME,MODEL,SERIAL,SIZE,TRAN,VENDOR,HCTL,TYPE,FSTYPE,LABEL,UUID

(I don’t remember why the second p option is there, but this just follows what the code is doing), the output is this:

NAME        MODEL        SERIAL          SIZE TRAN   VENDOR   HCTL       TYPE FSTYPE LABEL      UUID
/dev/sda    VBOX HARDDIS VBa2572f87-ff8   12G sata   ATA      0:0:0:0    disk
├─/dev/sda1                                2M                            part
├─/dev/sda2                               64M                            part vfat   EFI        6B75-B8F6
├─/dev/sda3                                2G                            part swap   SWAP       47109a0d-29c5-4472-9d1e-a16efe70c2c7
└─/dev/sda4                              9.9G                            part btrfs  ROOT       d0c1da4b-39d9-418c-a5f0-8077210d7f75
/dev/sdb    VBOX HARDDIS VB88af474c-d1e    6G sata   ATA      1:0:0:0    disk btrfs  rocksalami 9862f48a-d953-4759-8002-d826185c473c
/dev/sdc    VBOX HARDDIS VBc83a5941-724    6G sata   ATA      2:0:0:0    disk btrfs  rocksenf   4094b6a3-efb6-46b2-9408-1777de9c0df5
/dev/sdd    VBOX HARDDIS VB94d95720-603    6G sata   ATA      3:0:0:0    disk btrfs  rocksalami 9862f48a-d953-4759-8002-d826185c473c
/dev/sde    VBOX HARDDIS VB3293073a-4cc    6G sata   ATA      4:0:0:0    disk btrfs  rocksalami 9862f48a-d953-4759-8002-d826185c473c
/dev/sdf    VBOX HARDDIS VBd089f450-ffc    8G sata   ATA      5:0:0:0    disk btrfs  rocksenf   4094b6a3-efb6-46b2-9408-1777de9c0df5
/dev/sr0    VBOX CD-ROM  VB2-01700376   1024M ata    VBOX     7:0:0:0    rom

and as one can see /dev/sdf shows up as 8GB there.
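By contrast, btrfs itself still reports the old member size until a resize is requested; e.g.

btrfs filesystem show /mnt2/rocksenf

should still list the devid on /dev/sdf at its original 6GiB.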

I suspect, because it’s not a “new” disk, the updates done during this method:

do not update the size of the disk, as it is recognized as an existing one that’s already carrying a btrfs filesystem, but I have not analyzed the code here in detail. Both of these functions have not really been updated in the last two years. If your memory is correct, then the only thing I can imagine is that TW provides some different output from the lsblk command that would make Rockstor think it’s actually a new disk (if my assumption above is correct).

In summary, I suspect that the assumption has mostly been that a production Rockstor would most likely run on bare metal, and hence a “magic” growth of disk size would not usually occur.

@phillxnet, since you’ve delved into this area time and time again, maybe you can shed some more light on this, if you have time.

To your point above, at this time it seems that one doesn’t need to do any partitioning (as you rightly pointed out), but would instead run (at the command line) a resize of the btrfs filesystem on the mounted pool.


@Hooverdan Thanks for the reproducer here. Re:

We have a reproducer for this now, and as stated we assume / strongly suggest hardware drives: which simply don’t do this. But we can/should accommodate this without too much effort. Nothing has, as you say, changed in this area for some time: due to the focus of our more recent testing phases.

Could you create an issue: given you have a reproducer, citing @GoreMaker as the reporter.

I would not be inclined to tend to this issue in the current testing phase though (we are on RC3 already), or likely want to merge a change this low down that does not relate to our supported configurations, as per the recommendations in our:

https://rockstor.com/docs/installation/quickstart.html#minimum-system-requirements

Our current next stable Milestone does have a simple bug to be addressed: but that pertains to a recommended configuration: real drives.

But our next testing phase (not long now) would be a good one to tend to this. We must have tests at this level of the code though.

The need for btrfs to be expanded to fit larger host drives is covered in a tangentially related situation in our docs, re in-situ drive replacement (via btrfs replace), here:

https://rockstor.com/docs/data_loss.html#resize-larger-replacements

I.e. the filesystem does not auto-expand when a member increases its size: even if that drive has no partitions. When the filesystem was created, the available member sizes were set. Changes to those drive sizes are not auto-honoured. But one can request, as per that doc entry, that a Pool member be expanded to fit any underlying device expansion that may have taken place.
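The DevID of the grown member can be found via, for example:

btrfs filesystem show /mnt2/pool-name

before requesting the resize, as per below.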

@GoreMaker Another nice find. Thanks. This would be nice to tend to, but any layer between btrfs and the hardware is a reduction of data security; ergo not ideal. However we are moving, bit by bit, to being able to cope with progressively more changing circumstances as we go. But our Web-UI is a little stunted currently re reporting dynamic changes. We hope to address this more in the next testing phase as we improve/update things in the Web-UI area. Adding a more dynamic/live nature is definitely on the cards, but it may take a while as we have to re-architect a few bits and bobs. Report your findings once you have expanded (via CLI) the btrfs on that particular pool member. You may find things resolve themselves thereafter. And this would be contextual info on @Hooverdan’s to-be-created issue.

I.e. from the docs:

btrfs filesystem resize DevID:max /mnt2/pool-name

Hope that helps.


Thank you for this clarification.

Just to address the use of Rockstor as a VM: while running Rockstor as a VM may not be an initially-foreseen use case, it’s become a very common setup. I have a 2x 22-core server with 512GB of memory that cost me next-to-nothing (the used server market is a treasure trove of opportunities these days), and it would be immensely wasteful to run just Rockstor on such a computer. I specifically chose Rockstor because it takes advantage of the btrfs filesystem, while TrueNAS leverages ZFS. The btrfs filesystem is more than happy living on virtual block devices, while ZFS needs direct disk access to perform well (I do use ZFS as the underlying storage system in Proxmox). So Rockstor becomes an excellent candidate for running a dedicated file server as a VM.

I also considered OpenMediaVault since it supports a vast range of filesystems, but ultimately the higher number of features included in Rockstor means I can make it more useful within a single VM rather than having to create more separate VMs to meet my needs. This is a pretty sweet setup.


I also want to add that I love how responsive you all are to feedback. Encourages me to stick with Rockstor even more and explore more of its features :grinning:


FYI, here is the newly created issue on GitHub:

Take a look.
