Possible to Use Partitions, not Whole Devices, for BTRFS?

Hi, here’s a combination of a feature request / help request.

I’m not using RockStor yet, but looking at migrating to it.
I am specifically interested because of the BTRFS basis when compared to other NAS applications.
The only thing I don’t like from reading the documentation is that the GUI insists on using whole-disk block devices when adding disks to a BTRFS pool.

Your online documentation indicates that RockStor will not use existing partitions, and requires you to erase partitions and then use the whole-disk block device for BTRFS. I can appreciate the safety factor in preventing new users from overwriting valuable data on disks, though I am not sure it is a huge benefit, since the same new user can simply go on to erase their existing partitions anyway.

However, I believe there are strong arguments in favour of using partitioned disks, and adding the partition block device to BTRFS pools (rather than the whole disk block device).
Two specific benefits leap to mind:

  1. If you temporarily connect disks to another OS (or boot a different OS from another drive in the system), the presence of a partition table stops the other OS from “helpfully” overwriting your data disks.
  2. If you use partitions, you can choose not to use 100% of an SSD’s capacity. This lets you use consumer SSDs (which have limited over-provisioning at the flash level) while increasing the over-provisioning available for the SSD’s internal bad-block / erase management; see the sketch below.
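
For example, something along these lines (a sketch only; the device name and the 90% figure are illustrative assumptions, not anything Rockstor does today):

    # Partition an SSD so ~10% stays unpartitioned, boosting the drive's
    # effective over-provisioning (device name is illustrative):
    parted --script /dev/sdb mklabel gpt
    parted --script /dev/sdb mkpart primary 1MiB 90%
    # The unpartitioned tail is never written by the OS, leaving the SSD
    # controller free to use it for wear levelling and erase management.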

So… two feature requests:
a) Let the GUI use existing partitions (with suitable warnings) for a volume.
b) Add partitioning capability to the GUI.

A help request for how to do something in today’s RockStor :
Can I manually add a BTRFS array to RockStor by tweaking files at the CLI?
I appreciate from the documentation that it can’t be done in the GUI.
Is it as simple as setting up the BTRFS filesystem (manually at the CLI) and adding the filesystem to fstab, along the lines of the sketch below?
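
To be concrete, I mean something like this (a sketch only; device names and the mount point are made-up examples):

    # Create a btrfs filesystem across two partitions, then mount it:
    mkfs.btrfs -L mypool -d raid1 -m raid1 /dev/sdb1 /dev/sdc1
    mkdir -p /mnt/mypool
    # Any member device mounts the whole multi-device filesystem,
    # provided "btrfs device scan" has run first (udev normally does this):
    echo '/dev/sdb1  /mnt/mypool  btrfs  defaults  0 0' >> /etc/fstab
    mount /mnt/mypool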

Thanks,

M.

I think there was a feature request for this already. The answer was that right now it might overcomplicate things, but I do agree that it would be great (especially now that share quotas are not working).

I haven’t had any problem with the first point, but I absolutely agree on the second point. This one from Oct 2014 might be the one you wrote about, @weedv2. Especially once an SSD cache is possible, this would be a huge benefit.

I do think that share quotas will start working again in a month or so and, hopefully, will not be a problem again.

@weedv2 You have laid out a very good argument for supporting partitions. Thanks! Others have voiced similar opinions in the past and I’ve added a link to this topic on the github issue for this feature. Perhaps a good approach is to have something like an “enable hacker mode” switch which would enable more experimental features like these. In any case, this feature is not something I want to develop right now. I’d like to revisit it after hitting a couple more milestones. Perhaps other contributors will join and help us get there faster.

If your BTRFS filesystems are on whole drives, you can use the import feature on the UI and it will import them. If they are partitions, Rockstor won’t touch them. So you can indeed manage them outside of Rockstor with your scripts etc… It’s not something we can support, but at least the software doesn’t and shouldn’t interfere.
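
To check ahead of time whether an existing pool sits on whole drives or on partitions, there is a standard btrfs-progs command (nothing Rockstor-specific):

    # Lists each btrfs filesystem with its member devices; whole-drive
    # members show as e.g. /dev/sdb, partition members as /dev/sdb1:
    btrfs filesystem show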

I agree that it is not high priority right now; I would say Med/Low priority? I believe it’s going to be great since it would allow different redundancy levels depending on what we are storing. I’m about to build a 6-disk NAS and I’ll have to choose to either RAID 10 all of them, or RAID 10 only 4 and RAID 1 (or 0) the remaining 2… that greatly reduces the flexibility (see the sketch below).
I come from Storage Spaces, where doing different redundancy per volume is possible.
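
To make the six-disk example concrete (a sketch with assumed device names, done at the CLI since the GUI does not support it):

    # Six disks split into two pools with different redundancy levels:
    mkfs.btrfs -d raid10 -m raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mkfs.btrfs -d raid1  -m raid1  /dev/sdf /dev/sdg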

I also agree that you might want to hide it under an “Advanced Options” panel or the like, to avoid beginners doing crazy stuff.

Thanks for all the replies so far folks.

I have to admit the argument about another OS blasting whole disks is a fairly narrow corner case; most people (myself included) will have a dedicated NAS.

For me, GUI partitioning tools are a really low priority… if you want to do something with partitions, you ought to be comfortable enough to use an OS-level partitioning tool.

Being able to “import” a pool that lives on partitions would be my highest priority argument.

@weedv2 - prompt response :smile:
I’m not worried about the quota management on shares for my current use, but I can appreciate that it is a bigger problem for the community at large.

@suman - thanks for the comments from the developers’ perspective.
The “import” for existing pools on whole drives is interesting - I had not spotted that in the documentation.
May I suggest that, where the documentation states you must erase existing filesystems, it link to the fact that existing whole-drive pools can be imported? Currently the documentation reads as though you always have to erase existing filesystems.

@suman - again thanks for comments that Rockstor won’t manage volumes on partitions.
I have no problem with managing the partitions / pools outside of Rockstor.
I need to know whether the other features (setting up shares, etc.) will work within Rockstor.
I guess it may be time for me to grab some trial hardware and have a quick play :smile:

Thanks for the feedback folks,

Mark.

@marks Thanks for pointing out the shortcoming in the documentation. Our gracious moderator @phillxnet has updated the documentation. You can read it here.

Hello,

Can you give me a hint at a hack for importing a partition into Rockstor?

Rationale:
I have two SSDs in my machine, one with 16G and one with 120G. I would like to use the first 16G of the latter in combination with the 16G SSD as a RAID1, as soon as Rockstor supports this for the boot drive.

The rest of the 120G shall be used for Rock-ons etc.
I am aware that this is unsupported, but how can I manually import the volume residing on the rest of the drive?
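
For reference, the layout I have in mind would look roughly like this (a sketch; the device name and partition names are assumptions):

    # Split the 120G SSD: a 16G slice for a future RAID1 with the 16G SSD,
    # and the remainder for Rock-ons etc.:
    parted --script /dev/sdb mklabel gpt
    parted --script /dev/sdb mkpart raid1slice 1MiB 16GiB
    parted --script /dev/sdb mkpart rockons 16GiB 100%
    mkfs.btrfs -L rockons /dev/sdb2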

Regards,
Hendrik

Hello,

If I see it right, what I have in mind (i.e. the second partition on the root drive) is supported:

    if ((not is_root_disk and not is_partition) or
            (is_partition and is_btrfs)):
        # We have a non system disk that is not a partition
        # or
        # We have a partition that is btrfs formatted
        # In the case of a btrfs partition we override the parted flag.
        # Or we may just be a non system disk without partitions.
        dmap['parted'] = False
        dmap['root'] = False  # until we establish otherwise as we might be.
        if is_partition and is_btrfs:
            # a btrfs partition
            if (re.match(base_root_disk, dmap['NAME']) is not None):
                # We are assuming that a partition with a btrfs fs on is our
                # root if it's name begins with our base system disk name.
                # Now add the properties we stashed when looking at the base
                # root disk rather than the root partition we see here.
                dmap['SERIAL'] = root_serial
                dmap['root'] = True  # now we have base_root_disk name match
                dmap['MODEL'] = root_model
                dmap['TRAN'] = root_transport
                dmap['VENDOR'] = root_vendor
                dmap['HCTL'] = root_hctl
                # and if we are an md device then use get_md_members string
                # to populate our MODEL since it is otherwise unused.
                if (re.match('md', dmap['NAME']) is not None):
                    # cheap way to display our member drives
                    dmap['MODEL'] = get_md_members(dmap['NAME'])
            else:
                # ignore btrfs partitions that are not on our system disk.
                continue

(source: https://github.com/rockstor/rockstor-core/blob/master/src/rockstor/fs/btrfs.py)

@henfri Funny you should be looking there, as there is a pending pull request under review that changes this code a little more. It’s undergoing a few changes at the moment, bit by bit; the hope is to make it a little more flexible each time. There are, however, some less apparent complexities there, at least to my reading. We track each drive by its serial, and since there is considered to be only one system drive, it can have only one serial. But in the case of the system drive, or more specifically the / mount point partition, we also cater for partitions (in this case only), though only as far as labelling the system drive, or more accurately the system partition (the / mount point), as if it were a drive. That is why we display sda3, for example, whereas all other drives appear as base drive names, e.g. sdd. It is appreciated that this is slightly confusing, but the hope is to improve things as the code progresses.

The pending pull request is the latest attempt to make things more flexible by allowing, or at least being compliant with, / not being in a partition. This will open up the possibility of having / on, for example, an md125 device rather than a partition therein. That may open up further possibilities, but given this code is quite low down and central, it can be quite sensitive to apparently harmless changes. The changes indicated in the pull request are still being tested.
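
For context, the dmap fields in the snippet you quoted (NAME, SERIAL, MODEL, TRAN, VENDOR, HCTL) mirror lsblk output columns, so you can see the raw data this code parses with something like this (the exact column list here is illustrative):

    # Print one parseable line per block device, partitions included:
    lsblk -P -o NAME,MODEL,SERIAL,SIZE,TRAN,VENDOR,HCTL,TYPE,FSTYPE,LABEL,UUID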

The partition status of a drive is, however, used later on as a flag for its candidacy for a wipe, so that’s another wrinkle; but in the case of the root drive only, this partitioned-status flag is overloaded to avoid the “wipe me” invitation on the system disk.

Don’t know if that helps, but it’s definitely worth keeping an eye on this code as it changes, so that if you see an opening for improvement you can try it out. My concern is that as we open up more flexibility we may encounter huge increases in complexity, which would make things a great deal more difficult to maintain and debug. Hence some of the current limitations, and the bit-by-bit changes in that area of late.

I was just giving a heads-up that this code is currently in flux, in the hope of making things more flexible but not significantly more complex.