HDD capacity is reduced to half

Hi,
I installed Rockstor only a few days back, but I have been using btrfs for the last few years.
I started with a 6 TB and a 3 TB drive, where the 6 TB drive was partitioned into two 3 TB partitions. I used raid1 on sdb1 and sdc1 and single on sdb2 (the second partition of the 6 TB drive).
I quickly realized that this setup is not really possible with Rockstor, so I bought a second 6 TB drive. The plan was to use raid1 on the 6 TB drives and single on the 3 TB drive.

I did backups, wiped the 6 TB drive which had the partitions, and changed the role in Rockstor to use the whole drive (before that I used only the first partition). Unfortunately, even though I wiped it, I still see only half of the capacity in the Web-UI (Storage -> Disks).
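For reference, the command-line equivalent of that wipe would be something like the following (just a sketch; /dev/sdb is assumed from the fdisk output further down, and the command destructively erases all signatures on the drive):

# wipe every signature (partition table, old btrfs superblocks) from the old 6 TB drive
wipefs -a /dev/sdb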

Any idea how I could force Rockstor to use the whole drive?

@rezance Welcome to the Rockstor community.
Re:

Yes, our btrfs-in-partition capability only allows for one btrfs partition per device. In order to try and keep things simple, both on the Web-UI side and in the coding involved, we impose some restrictions on our flexibility, and this is one of them.

This is likely a bug and may be only cosmetic. But it is definitely not expected if you have started afresh on this pool. Could you give the output of the following command, run as root:

btrfs fi show

and a screen capture of the pool details page for the 2 x 6 TB drive pool.

Also could you confirm the Rockstor version you are running here:

yum info rockstor

or on newer Rockstor 4 variants:

zypper info rockstor

Cheers.

So first, the version: it should be the latest testing release.

# zypper info rockstor
Loading repository data...
Reading installed packages...


Information for package rockstor:
---------------------------------
Repository     : Rockstor-Testing
Name           : rockstor
Version        : 4.0.7-0
Arch           : x86_64
Vendor         : YewTreeApps
Installed Size : 74.4 MiB
Installed      : Yes
Status         : up-to-date
Source package : rockstor-4.0.7-0.src
Summary        : Btrfs Network Attached Storage (NAS) Appliance.
Description    :
    Software raid, snapshot capable NAS solution with built-in file integrity protection.
    Allows for file sharing between network attached devices.

btrfs fi show gives the following:

# btrfs fi show
Label: 'ROOT'  uuid: 87514575-45ec-478d-b433-0f0b96ff504e
        Total devices 1 FS bytes used 2.51GiB
        devid    1 size 109.75GiB used 2.80GiB path /dev/sda4

Label: '2f84e287-4c6a-4750-981e-b19e4db8b16f'  uuid: 2f84e287-4c6a-4750-981e-b19e4db8b16f
        Total devices 1 FS bytes used 1.36TiB
        devid    1 size 2.73TiB used 1.36TiB path /dev/sdc

Label: 'DATA'  uuid: b75b204c-586a-4382-91c9-4563b7993c97
        Total devices 1 FS bytes used 1.49TiB
        devid    1 size 5.46TiB used 1.50TiB path /dev/sdd

Explanation:
2f84e287-4c6a-4750-981e-b19e4db8b16f is the original pool which I had before I installed Rockstor. After I installed Rockstor and realized that my pools on multiple partitions are not how Rockstor is supposed to be used, I converted pool 2f84e287-4c6a-4750-981e-b19e4db8b16f from raid1 to single and then removed the 6 TB drive, so now only the 3 TB drive is left in 2f84e287-4c6a-4750-981e-b19e4db8b16f in single mode.

DATA is a new pool created from scratch in Rockstor. I made that pool from the newly purchased 6 TB HDD; my plan was to create the new DATA pool, then convert 2f84e287-4c6a-4750-981e-b19e4db8b16f from raid1 → single to release the old 6 TB drive, then add this drive to the DATA pool and also change it to raid1. Currently the old drive is not used in any pool; it is wiped, but it shows only half of its capacity.
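For anyone following along, that raid1 → single conversion and device removal would look roughly like this from the command line (a sketch only; the /mnt2/<pool-label> mount point is an assumption based on where Rockstor mounts its pools):

# convert data and metadata profiles from raid1 to single (reducing metadata redundancy may need -f)
btrfs balance start -dconvert=single -mconvert=single /mnt2/<pool-label>
# then remove the old 6 TB drive from the pool
btrfs device delete /dev/sdb /mnt2/<pool-label>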

 # fdisk -l
Disk /dev/sda: 111.8 GiB, 120034123776 bytes, 234441648 sectors
Disk model: KINGSTON SA400S3
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 64536473-33FC-4B32-B87C-49ABD98E7BB7

Device       Start       End   Sectors   Size Type
/dev/sda1     2048      6143      4096     2M BIOS boot
/dev/sda2     6144     73727     67584    33M EFI System
/dev/sda3    73728   4268031   4194304     2G Linux swap
/dev/sda4  4268032 234441614 230173583 109.8G Linux filesystem


Disk /dev/sdb: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: WDC WD60EFRX-68L
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: TOSHIBA DT01ACA3
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdd: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: ST6000VN001-2BB1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

This is planned to be a 2 x 6 TB pool (currently it is single, with a single 6 TB drive).

When I tried to add the additional old 6 TB HDD, it looked like this:


Only 2.73TB…
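Before going further, it may be worth confirming that the kernel itself still sees the full size and that only Rockstor's record is stale; for example (device name assumed):

# raw device size and any leftover filesystem signature, as the kernel sees it
lsblk -b -o NAME,SIZE,FSTYPE /dev/sdb
# size in bytes straight from the block layer
blockdev --getsize64 /dev/sdb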

I was worried about adding it to the pool like this, as I was unsure about the result. I would like to avoid an unnecessary btrfs balance, which, as I read, is done automatically after a resize operation in Rockstor.
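If a balance does get triggered, I understand its progress can be checked with something like this (assuming /mnt2/DATA, which is my guess at where Rockstor mounts the pool):

# show whether a balance is currently running on the DATA pool
btrfs balance status /mnt2/DATA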

Can I extend the DATA pool and convert it to raid1 from the command line, or will that confuse Rockstor?
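From what I have read, the raw commands would be something like the following (only a sketch, not something I have run; /mnt2/DATA is my assumption of the pool's mount point, and doing this behind Rockstor's back may leave its database stale until a rescan):

# add the old 6 TB drive to the DATA pool
btrfs device add /dev/sdb /mnt2/DATA
# then convert data and metadata profiles to raid1 across both devices
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt2/DATA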

Is there any chance to delete that drive from the database?
I found this in the manual: https://rockstor.com/docs/disks.html#detached-disks
According to this, if a disk is not in use I should be able to delete it, and there should be a trash icon. But I don't have this icon, so I tried removing the HDD physically from the server. I was hoping that maybe, once it was deleted completely from the database, it could be used again at full capacity :)

The last option for me (maybe the best) is just to reinstall Rockstor. I have not done much configuration yet, so it would definitely be faster than waiting for 1.5 TB of data to re-balance again.

PS: I like the forum UI a lot; now I understand why you are not so active on reddit.

Thank you for help and warm welcome.

Awww, I solved it.
I just needed to hit the Rescan button at the bottom of Storage -> Disks.
I thought that a reboot would do a re-scan.


@rezance Thanks for the update and glad you got it sorted.

Fancy that, well done.
Re:

So did I. It may well be we have a bug here where the db is just not updated until forced that way. Likely this is insufficient 'btrfs dev' rescanning, which I still think we need to do more often. Thanks for the feedback and for your work-around. We do rescan the drives regularly for their status, but it looks like, in the corner case of a drive first having been seen with a partition and then being wiped, we don't properly refresh without intervention. Good to know. With a reproducer we can create an issue on GitHub for this one and then track down where we need to 'press the button' internally.
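For reference, the underlying idea is roughly a fresh device scan, e.g.:

# ask the kernel to re-scan block devices for btrfs filesystems
btrfs device scan

though Rockstor's own refresh does more than that, so treat the above as a sketch of the principle rather than what the Rescan button literally runs.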

We are not on reddit. Only the forum.

Thanks for the report, much appreciated. And well done on finding the work-around.

Cheers.


Is this a deliberate design decision or just something you don’t have bandwidth to look at for a while? I have 2 x 3TB and an 8TB, so I plan to split the 8TB into a 6TB and 2TB partition and have 6TB RAID1 and 2TB with no resilience.

I am actually using Btrfs for this rather than FreeNAS/ZFS because I understand this is much easier in Btrfs, so it’s a shame to have restrictions at the UI level.

I may be able to find time to look into it myself at some point, but I don’t want to do that if you wouldn’t want the patches. Or is there a sensible half-way house there, like not supporting creating that configuration in the GUI, but supporting it once it is there?


@Hooloovoo Welcome to the Rockstor community.
Re:

It’s a deliberate design decision. Things are a lot simpler throughout the entire design if we only ever have one btrfs ‘source’ per physical/virtual device.

No half-way house I'm afraid. Our Web-UI would be rather confused by this arrangement and would likely then become completely useless. We have to entertain such limitations in order to keep the project's complexity and testing 'surface' from exploding. Plus we have far more pressing issues to address before even entertaining this degree of flexibility, such as, as was mentioned, extending our far cleaner btrfs-in-partition 'treatment' to the system disk. At least then we could extend more easily to cope with the likely more popular capability of multi-disk system pools. But that in itself needs more mature upstream support.

By all means take a look at our code. But all critical-path code, such as the disk/pool/share management, must be accompanied by tests, and all prior tests must also pass without significant modification. This is a non-trivial task given that we also support full-disk LUKS and bcache (the latter untested in 4), and quite the test heritage covering both our older legacy CentOS btrfs system disk arrangement and the far more complex (read comprehensive) openSUSE btrfs arrangement. The latter encompasses such goodies as boot-to-snapshot, which we currently support, within limits, along with LUKS on bcache-backed devices if need be. Hence the desire to migrate our btrfs-in-partition, disk-role based approach to the system drive to simplify things, so we might move towards greater flexibility in the mid to long term. But that does not include expanding to more than a single btrfs 'source' (partition) per device. There would also be a potential explosion in the options regarding how many disks can be removed / added etc., as our entire 'stack' is based on this single-device/single-btrfs-source assumption. We would have to re-work our entire limits/warnings/storage distinctions to cope with this change. The scale of this change and its lowest-level nature (at least in Rockstor's world) would pretty much rule out any suggested change of this type if submitted without extensive proof across our entire capabilities, including all user 'fencing' and some fairly hairy duplication code we have to maintain.

It is of course doable. I'm just not entertaining a change of this type for quite some time yet, I'm afraid. We must first transition to a modern Django and Python before even considering such things, unfortunately. But do stick around, as you are not the only one who would like to see greater flexibility; it's just that we must address our technical debt first, and in doing so hopefully gain all the goodies that come with that.

A good starting point, and one that may also help with recognising an element of our core design, is the following technical wiki entry:

You never know, you may even find a cute way to do exactly what you want here. But again, I'm not keen on merging such large changes that don't address our technical debt for quite some time, unfortunately. And we have our Rockstor 4 release to finalise. But thereafter I will be starting a new testing release/branch with a far newer Django; this branch will be broken, as per all early testing releases, so there will again, likely, be a long road to the next Stable. However, I'm hoping that once we have our Python 2 to 3 transition in the bag we can attract more contributors, and that can only be good.

Base flexibility has a habit of exploding the number of permutations unless we can contain it. Our single-btrfs-partition-per-device limit is our current brute-force way of containing these permutations/complexities.

Bit by bit, and thanks for your input and interest here; much appreciated.

Hope that helps, at least to identify where our focus lies currently.


Thanks @phillxnet – I appreciate the thorough and prompt response. It certainly makes me more inclined to stick around and help if I can. While your answer doesn’t help my immediate use case, I respect the clarity of vision. I am keen to see you succeed, so good luck!

Groan. I've done a bit of 2to3 transitioning myself and I don't envy you.

If I cannot use Rockstor as an appliance now, I may poke around and see if there are things you’ve done (GUIs/dashboards for Btrfs monitoring, alerting etc) that could be useful on a standard (Ubuntu) server running Btrfs. It would be nice to collaborate rather than build some janky collection of scripts.
