Consolidate Multiple Shares Into Directories On One

Hello,

First off, I’m new to the forum, but I’ve been using Rockstor for a while now. Love the software – a great option for those wanting a btrfs NAS solution that can run on ARM.

I’m hoping for a solution to a specific problem I didn’t realize I had until recently. My use case is pretty typical. I run Rockstor on a dedicated machine in my home network with 2x8TB HDDs in RAID0, exporting shares over NFS. In addition to this I have a Raspberry Pi running your run-of-the-mill media server stuff in Docker using linuxserver images (Jellyfin, torrents, *arr apps, etc.).

I recently took an interest in optimizing my setup and discovered the TRaSH Guides. Reading through the section on hardlinks, I realized where I’d gone wrong.

The section provides tips on how to properly set up shares and directory structure for various platforms (I think the guide that most closely matches Rockstor is the unRAID one). Specifically, it explains why it is not a good idea to create and export multiple shares – a share for books, a share for movies, etc. – and recommends instead a single share with the different categories of media organized into subfolders. This is important because hardlinks (created by the *arr apps) require the source and target files to exist on the same logical device.
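
If I’m reading the guide right, the end goal is a single share with the categories as plain subdirectories – something like the sketch below (the paths are my own illustration of the idea, not copied verbatim from the guide):

    # one share ("data"), with the download area and the media library as
    # ordinary subdirectories, so imports can be hardlinked rather than copied
    mkdir -p /mnt2/data/torrents/movies /mnt2/data/torrents/tv
    mkdir -p /mnt2/data/media/movies /mnt2/data/media/tv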

Unfortunately I have done exactly what I wasn’t supposed to do, and have gone with the share-per-category approach. I know that I can fix it, but the methods known to me are rather unpalatable: given the limited space remaining in my pool, I can either purchase more drives, create an additional pool with the desired configuration and transfer everything over, or I can create a new share on the same pool and painstakingly migrate everything by hand, resulting in significant downtime for my services.

With all that background out of the way, is there a better solution I can try? I’m not too knowledgeable about btrfs, but I’m hoping it has some special sauce I can utilize, with snapshots or something.

I’m grateful for any tips!

@apriestley Welcome to the Rockstor community forum.

I can only chip in quickly/a little on this one. Others can hopefully point to some proposals re allowing the use of directories within a Share for our Rock-ons.

The advice you have read regarding apparent share management pertains more to directory management. Hardlinks cannot span filesystems. Each Rockstor Share is essentially a filesystem (sharing its metadata with other shares within the same Pool). We intentionally use the Share concept as a share of the Pool’s space. But it is also, in btrfs speak, a subvolume: Pool = btrfs volume, Share = btrfs subvolume. And, confusingly, a btrfs subvolume (Share in Rockstor speak) actually appears as a directory in the parent btrfs volume (Pool in Rockstor speak). So our underlying filesystem already uses directories to separate its subvolumes; we piggy-back on this concept, but also mount each share individually to enable a more apparent separation.
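
As a rough illustration of that mapping from the command line (the pool name here is just a placeholder for whatever you named yours):

    # Pool = btrfs volume, Share = btrfs subvolume.
    # List the subvolumes (Rockstor Shares) that live inside a Pool:
    btrfs subvolume list /mnt2/my_pool

    # Each Share is also mounted in its own right under /mnt2/<share_name>:
    findmnt -t btrfs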

We hinge our entire Web-UI design around a simple separation of concerns: each Share (subvolume) is a unit of storage from its associated Pool (volume). This is sound, and has worked for us, but we have had folks request the ability to work with regular directories within a single Share – which is one way your request could be read. That would be a major re-work and is not appropriate at this time, and I personally think its benefits are outweighed by the significant increase in complexity. We just don’t have the ongoing contributor count to maintain that level of increased complexity. But it may be far more approachable in the scope of Rock-ons only!

But that is fine. And keep in mind that you can also have multiple Pools per system, if hardware allows.

Apologies for not being able to spend more time on this response.

Hope that helps.


Thanks for the reply!

We hinge our entire Web-UI design around a simple separation of concerns: each Share (subvolume) is a unit of storage from its associated Pool (volume). This is sound, and has worked for us, but we have had folks request the ability to work with regular directories within a single Share – which is one way your request could be read.

So just to clarify, I’m not asking for a new Web-UI feature. I’m totally okay with fixing my issue in the terminal – rather, I’m seeking advice on how to go about it.

After I posted initially, I re-read some of the interface documentation, and this section caught my attention.

So if I use the btrfs CLI I can do something like btrfs subvolume snapshot /mnt2/movies /mnt2/data/media/movies. This creates a reference to the data in the original share, but:

  • Files present in a Share when a Snapshot is taken are preserved in it even if they are deleted in the original Share afterwards.

This implies that the data is referenced by both the original share and the snapshot. Confusing btrfs concepts, but I kind of get it. What happens to the data if I delete the original share (will I even be able to, or is the snapshot dependent on the share)?

  • A Snapshot can be cloned to become a brand new Share

Is this what I should do after snapshotting the share as above, so that the data can exist independently in its new subvolume and I can safely remove the original share? At what point is the data physically replicated, if at all? Due to limited remaining space I need to avoid copying as much as possible.
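
To make that question concrete: after taking the snapshot as above, I would expect something like the following to show nearly all of the data as shared rather than exclusive, if I’m understanding copy-on-write correctly (paths follow my earlier example):

    # compare usage of the original share and its snapshot; with pure
    # copy-on-write, the "Exclusive" figures should be near zero at first
    btrfs filesystem du -s /mnt2/movies /mnt2/data/media/movies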

Is it? Using an NFS client connected to exports of two shares, if I try to hard link a file in one share to the other, I get the cross-device error.
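
For example (the share names are from my setup, and the client-side mount points are just illustrative):

    # two shares from the same pool, mounted on an NFS client
    ln /mnt/nfs/movies/film.mkv /mnt/nfs/books/film.mkv
    # ln: failed to create hard link ... Invalid cross-device link

    # within a single share the same operation works fine
    ln /mnt/nfs/movies/film.mkv /mnt/nfs/movies/film-link.mkv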

Do you actually need the hard links?

All my *arr apps work, either as Rock-ons on the same machine as Rockstor, or on a separate system where I mount the NFS share and point the *arr app to the appropriate “root folder”.


They aren’t strictly required for the *arr apps to work, but according to the guide I posted previously, not having them leaves a lot of optimization potential on the table. When new downloads are imported by the *arr apps, they are hardlinked if possible. If hardlinking is not possible, they are copied instead, which takes much longer and doubles the storage requirement for those files until they are removed from the original download location.
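
A quick way to see which of the two happened for a given import (the file names here are just examples, and the paths are as seen from the NAS rather than from inside the containers):

    # a hardlinked import shares an inode with the original download and
    # shows a link count greater than 1; a copy gets its own inode
    stat -c '%i %h %n' /mnt2/data/torrents/movies/film.mkv \
                       /mnt2/data/media/movies/film.mkv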

I almost have the solution figured out. It cannot be done using the Rockstor Web-UI, but it involves snapshotting the existing subvolumes (shares) into the new share as subdirectories, followed by setting the default subvolume to said snapshot so that the original can be deleted without destroying the data. A problem I’m encountering with this is an apparent btrfs send operation from the original subvolume that never seems to finish. Not sure what the cause is yet.
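
For anyone following along, this is the rough shape of what I’m attempting – very much a sketch, not yet fully working, the share and pool names are from my own setup, and I’ve left the set-default step out here:

    # the new consolidated share ("data") was created in the Web-UI;
    # create the category parent directory inside it first
    mkdir -p /mnt2/data/media

    # then snapshot each existing share into it as a subdirectory
    btrfs subvolume snapshot /mnt2/movies /mnt2/data/media/movies
    btrfs subvolume snapshot /mnt2/books /mnt2/data/media/books

    # confirm the new snapshots show up as subvolumes of the pool
    btrfs subvolume list /mnt2/main_pool

    # only after re-pointing the NFS export and the *arr root folders at the
    # new share would I delete the original shares (via the Web-UI)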

There was another recent thread on btrfs send/replication; perhaps that might provide some clues.