Need help with Rock-on add-ons and space usage

Brief description of the problem

I am in the process of making a Rock-on package, UrBackup. Everything seems fine on the Docker side; my problem seems to be on the data side of things.

Detailed step by step instructions to reproduce the problem

Here is the UrBackup.json:

    "UrBackup": {
        "containers": {
            "UrBackup": {
                "image": "uroni/urbackup-server",
                "launch_order": 1,
                "opts": [
                    ["--cap-add", "SYS_ADMIN" ],
                    ["--net", "host"]

            "volumes": {
                "/var/urbackup": {
                    "description": "Database Location",
                    "label": "Config/Database Storage"
                "/backups": {
                    "description": "Backup Location",
                    "label": "Backup Storage"
            "environment": {
                "PUID": {
                    "description": "Enter a valid UID to run",
                    "label": "UID",
                    "index": 1,
                "PGID": {
                    "description": "Enter a valid GID to use along with the same UID. ",
                    "label": "GID",
                    "index": 2,

            "ports": {
                "55414": {
                    "description": "WebuiPort",
                    "host_default": 55414,
                    "label": "WebUI port",
                    "protocol": "tcp",
                    "ui": true
                "55413": {
                    "description": "FastCGI Web",
                    "host_default": 55413,
                    "label": "HTTPS port",
                    "protocol": "tcp",
                    "ui": false
                "55415": {
                    "description": "Internet Client",
                    "host_default": 55415,
                    "label": "Internet client",
                    "protocol": "tcp",
                    "ui": false
                "35623": {
                    "description": "UDP Discovery Broadcast",
                    "host_default": 35623,
                    "label": "UDP Discovery",
                    "protocol": "udp",
                    "ui": false
    "description": "UrBAckup. <p>Based on UrBackup image: <a href='' target='_blank'></a>.",
    "volume_add_support": true,
    "ui": {
        "slug": ""
    "website": "",
    "version": "Latest"

Here is a part that may help solve the problem:

Web-UI screenshot


While I don't mind the 1.1 TB in Rockstor vs the 1 TB in UrBackup (there could be a lot of factors), the real problem is that if I try to send/receive (replication), only the 160 MB inside the BACKUP folder is sent across. Not what we want for a backup.

Hope my question is clear, as English is not my primary language.

If you need more information, feel free to ask!

Welcome @Jaune to the Rockstor community. Just some clarifying questions:

The actual UrBackup setup works, meaning the docker container spins up, you can configure and backup the data from clients onto Rockstor into the Backup folder, correct?
The issue you are having is that the actual backup is 1TB worth of data, however when trying to perform send/receive (I assume, using something like rsync, or what are you using?) of that backup folder to another location, it only seems to take a portion of the data?

From reading the link you posted, is Urbackup working in btrfs mode (doesn’t seem to be automatic detection but you have to do a couple of things to make it work)? If it is, then it might be that the send/receive action is not recognizing the sub-volumes that have been generated, right?
From that link you posted:

If UrBackup detects a btrfs file system it uses a special snapshotting file backup mode. It saves every file backup of every client in a separate btrfs sub-volume. When creating an incremental file backup UrBackup then creates a snapshot of the last file backup and removes, adds and changes only the files required to update the snapshot. This is much faster than the normal method, where UrBackup links (hard link) every file in the new incremental file backup to the file in the last one. It also uses less metadata (information about files, i.e., directory entries).
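As an aside, the hard-link method the quote contrasts with can be sketched in a few lines of shell. The paths and file names below are made up for illustration; this is not UrBackup's actual code, just the general technique:

```shell
# Hard-link incremental sketch: link every file from the last backup into
# the new one (instant, no data copied), then replace only changed files.
work=$(mktemp -d)
mkdir -p "$work/backup.1"
echo "v1" > "$work/backup.1/file.txt"

cp -al "$work/backup.1" "$work/backup.2"        # hard-link copy of the tree
echo "v2" > "$work/backup.2/file.txt.tmp"       # write the changed file aside
mv "$work/backup.2/file.txt.tmp" "$work/backup.2/file.txt"  # break the link

# backup.1 still holds v1, backup.2 holds v2; unchanged files share storage.
```

On btrfs, UrBackup can replace this per-file linking with a single snapshot of the previous backup's subvolume, which is why each backup ends up as its own subvolume.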

Sorry, not a solution, just wanted to get some more clarity on your issue.


The actual UrBackup setup works, meaning the docker container spins up, you can configure and backup the data from clients onto Rockstor into the Backup folder, correct?


however when trying to perform send/receive (I assume, using something like rsync, or what are you using?)

I'm using the Rockstor send/receive function (which I assume is the same as btrfs send | ssh btrfs receive?).

of that backup folder to another location, it only seems to take a portion of the data?
It's only using 160 MB of space (I don't know why), whereas we can clearly see the 1.33 TB in the pool capacity usage.

From reading the link you posted, is Urbackup working in btrfs mode (doesn’t seem to be automatic detection but you have to do a couple of things to make it work)?

UrBackup makes the detection automatically.

Since it's backup software, I don't really want to share the data via SMB or NFS (we can access it via the UrBackup web interface). It would be nice if the share usage showed the correct data, but…

What I am really after is the replication feature of Rockstor, to send the created backups to another location: to back up the backup.

If that can help:

I only have 2 users backing up to this server (for testing purposes), so if I'm reading the UrBackup instructions correctly, I should have 2 subfolders (1 for each user) and a few snapshots in them.


@Jaune Nice set of additional info.


btrfs subvol list /

This will show the info about what is mounted at “/” which is the current subvol that represents the snapshot you are on in the “ROOT” pool. We inherit openSUSE’s boot to snapshot.

What you probably want here is the subvol list of your “data1” pool:

btrfs subvolume list /mnt2/data1

You may also find the usage command of use:

btrfs fi usage /mnt2/data1

Hope that helps. Also note that it takes 5 replication events for Rockstor to settle its send/receive wrapper. There will, after this, be 3 snapshots at each end. You may just have an earlier copy send/received, and after the required number of replication events has taken place they will be in sync. I.e. earlier you had less data than now, say, and so you are looking back in time to when that event was run. Sorry, still not quite enough to work with. We really need those replication docs to make this clearer.

Hope that helps, at least with getting a clearer picture of what's going on.


After some fiddling around,
the result of btrfs subvolume list /mnt2/data1 is:

It makes sense, since the Backup folder is under /mnt2/ (it's also the share I give to UrBackup).
The result of btrfs fi usage /mnt2/data1:

This is where I don't understand, and the send/receive seems to agree with the Rockstor UI, as it only sends 221 MB.
The UI shows me that the Backup share only uses 221 MB of space… but the btrfs usage shows me 1.38 TB, which is roughly the same as the space used by UrBackup shown in its UI.

Hope this can help you help me ^^
I would really like to make UrBackup work, since it works perfectly with btrfs, as Rockstor does!
Thanks for the hand, guys.

@Jaune Hello again.

Could you clarify the following:

Which share is that? To me it reads like /mnt2 is the share you are using. But /mnt2 is just a directory at the root of the "ROOT" pool, or at least in its default snapshot root for boot to snapshots :slight_smile: .

Could you clarify the shares you are using for what purpose. Sorry to not be picking up on this better.

/mnt2/data1 is a mount point for the actual root of the entire pool.
/mnt2/Backup is likely a btrfs subvol mount point.

A potential element of confusion here is that we mount the entire pool and its subvols at the same 'level' in the overall root filesystem. Each Pool subvol is consequently accessible via either its independent mount or its parent mount. But we require both to be independently mounted, as we do some pool-wide stuff and some subvol-specific stuff. The Pool mount is kind of a meta mount of all its subvolumes. In btrfs each subvol is presented within the parent pool 'space' as a directory; but it's actually also a kind of filesystem in and of itself (almost). Note also that each subvolume (share in Rockstor speak) can also have its own subvolumes; we generally don't use this next layer though. But our replication may touch on this, from memory.
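As an illustrative sketch of those parallel mounts (the device, pool, and share names here are made up, not read from any real system), an fstab-style view might look like:

```
# Hypothetical fstab-style view of Rockstor's parallel mounts:
/dev/sda3  /mnt2/data1   btrfs  subvolid=5     0 0   # the whole pool (top level)
/dev/sda3  /mnt2/Backup  btrfs  subvol=Backup  0 0   # one share (subvol)
```

Both lines point at the same device, so the share is reachable either via its own mount point or as a directory inside the pool mount.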

I'm thinking that you are trying to back up multiple shares, but note that each share is a btrfs subvolume and so is like an independent filesystem in its own right. But they are nested, and actually share some stuff (some metadata) with the parent pool. But send/receive only concerns itself with a single share (btrfs subvol) and will not include other filesystems (subvols).

So the UI, the subvol "Backup", and send/receive are all concerning themselves with 221 MB. What you are expecting them to be concerned with is, I think, the info I'm missing here. Again, apologies for likely not seeing this exactly as you are, but there's no harm in working on clarifying the discrepancy there. This is likely down to a conceptual misunderstanding of what send/receive actually does.

This may be down to UrBackup using subvolumes itself. There was talk of this earlier in this thread by @Hooverdan as well. Maybe there are configuration elements that can be tweaked here. Note that your subvol list screen grab indicates many subvols that look to be related and likely non-Rockstor related. And so you may be expecting a recursive backup into subvolumes, as they look like directories, but they are in fact access points to other subvols.

Hope that helps at least to tease out more of what may be leading to a variety of expectations.


In the urbackup.json (for a Rock-on add-on I'm trying to make):
in my understanding, I make a share in Rockstor named Backup; I then make an add-on that uses the share.


So technically /mnt2/Backup should contain all the data of the container's backups, which should be 1.3 TB, not 200 MB, since every backup is in /mnt2/Backup/.
Sorry if it's not clear; I'm trying my best, but English is not my primary language. Also, thank you for trying to help me.


@Jaune OK, thanks.



My current guess is that it does, but just by way of some sub-subvolumes.
I.e. the share (subvol) "Backup" (/backups inside the docker container) is the share handed to UrBackup.
But UrBackup then, in turn, creates subvolumes under that share (btrfs subvolume).
This is legitimate and entirely normal, but each of these sub-subvolumes is in and of itself independent of the parent subvolume. The total of all sub-subvolumes may well be the expected value, but the top one only contains the 221 MB.

Try for example the following:

btrfs fi usage /mnt2/Backup/Sunshine/220123-1402

Or any one of the other subvolumes listed in your screen grab of the btrfs subvolume list /mnt2/data1 result.

From that picture we see that inside the share (btrfs subvol) Backup (the destination of all backups) there are many other subvolumes. They all likely add up to the expected total, is my guess. But your 221 MB is only the top subvol.
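As an illustration, nested subvolumes can be spotted in a subvolume listing because their path sits below the share. The sample output below is fabricated (the real IDs and paths are only in the screen grab):

```shell
# Fabricated sample of `btrfs subvolume list /mnt2/data1` output:
list_output='ID 257 gen 10 top level 5 path Backup
ID 300 gen 42 top level 257 path Backup/Sunshine/220123-1402
ID 301 gen 43 top level 257 path Backup/touchepo/220123-1500'

# Paths containing "/" are subvolumes nested inside a share; Rockstor's UI
# and btrfs send/receive only account for the top-level "Backup" subvol.
echo "$list_output" | awk '$NF ~ /\// { print "nested:", $NF }'
```

On a real system, `btrfs subvolume list -o /mnt2/Backup` would list only the subvolumes below that path directly.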

I suspect this is what is happening differently from what you expect.

For example, are "Sunshine" and "touchepo" different systems? In which case we see UrBackup is potentially creating sub-sub-subvolumes:

  • Backup: a (subvol) share in Rockstor speak.
    • Backup in turn has subvols per machine, created by UrBackup.
      • each machine subvol has subvols for each incremental/full backup.

That type of thing. Also note that snapshots, and what Rockstor calls "clones", are also just subvols.

See what those sub-subvol usage reports give you, size-wise. I think we are getting closer on this one, hopefully. And our replication system, as per btrfs send/receive, does not transit these subvolumes. Also note that subvolumes can be mounted anywhere. They are often nested within their parent but do not have to be. Such as, for example, our own share mounts, which we mount parallel with their parent.

/mnt2/share-name-mount-point (parallel with its parent pool's mount above)

Rockstor itself only considers (mostly) the top-level subvols as shares and will ignore others. And btrfs send/receive does not, from memory, transit over subvol boundaries.

The answer here may be to use something more configurable for your btrfs backup of a nested-subvolume data set, such as UrBackup looks to be generating.

btrbk -

I don't know, but it may have the ability to recursively send nested subvolumes, which is what I'm assuming you're after, i.e. to transfer UrBackup's target subvol to another system.

Sorry to not be more helpful here, but I think the output of the above command, and of the other subvols created in the share Backup, will hopefully shed light on what is happening here.

Btrfs has some magic about it, and some things look just like regular filesystems but are not. There are, in a way, invisible boundaries between what might normally be thought of as the same filesystem; i.e. assuming a subdirectory is on the same filesystem is not correct. And indeed we see this on regular filesystems by way of mount points. All of Rockstor's data pools and their associated shares (top-level btrfs subvols) are actually mounted on a directory called /mnt2; but if one was doing a backup of /, they would not be included, as there is nothing in /mnt2 on the root pool. It's just an empty directory.

Hope that helps. Incidentally what is your primary language? There may be another native speaker of that language here on the forum? I’m afraid my natural language skills are very much limited to English, along with a smattering of French and European Portuguese.


Yes, you are right: UrBackup will make a subvolume for each snapshot (Sunshine and touche pas are 2 computers, for testing purposes).
And if I btrfs fi usage /mnt2/Backup/*, they all add up as they should.

Is there any way to force a wildcard in the Rockstor send/receive, so I can send everything under /mnt2/Backup?
btrbk seems interesting, but it just adds another layer of complexity, and will render the Rockstor UI a little useless?

I'm a French Canadian, by the way ^^

@Jaune Hello again.

OK, so that’s reassuring then.

No, we don't do recursive subvols, and I believe that send/receive likewise doesn't either, given a subvol is considered as a filesystem barrier/edge of sorts.

That’s a bit harsh :slight_smile: . Our purpose is more than a send/receive replication wrapper.

Cheers. We have at least one native French speaker here on the forum but I don’t see any language barrier to date in what I’m hoping is passing for reasonable English.

Take a look around at other send/receive wrappers; it may be there are others that can do what you need. Or there may be another backup Rock-on, such as the duplicati one, that will happily ignore subvols. It won't benefit from the potential bandwidth savings of send/receive, but that may be a workaround for now. I.e. UrBackup to interface with the clients, and duplicati to dump the result elsewhere.
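For what it's worth, a manual workaround could loop over the nested subvolumes and send each one individually. Below is a dry-run sketch that only prints the commands it would run; the function name, host, and destination path are all hypothetical, and on a real system each subvol would first need a read-only snapshot, since btrfs send requires read-only subvolumes:

```shell
# Print one send/receive pipeline per child subvolume of SRC. Children are
# iterated as directories here; on a real system, filter with
# `btrfs subvolume list -o` instead of a plain glob.
plan_sends() {
  src=$1 host=$2 dest=$3
  for sub in "$src"/*/; do
    [ -d "$sub" ] || continue
    printf "btrfs send '%s' | ssh %s btrfs receive '%s'\n" \
      "${sub%/}" "$host" "$dest"
  done
}

# Demo against mock directories standing in for the per-client subvols:
demo=$(mktemp -d)
mkdir -p "$demo/Sunshine" "$demo/touchepo"
plan_sends "$demo" backuphost /mnt2/Backup-copy
```

This is only a planning sketch; actually running the printed commands is left to tools like btrbk, which handle snapshotting and incremental parent selection properly.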

Hope that helps, and I'm glad we got to the bottom of this.

One final thought: maybe you can configure UrBackup to not make/use subvols. It may then dump the lot in a single subvol (the share it was given), and you are then sorted. Not sure of the downsides, as I'm not familiar with UrBackup. And I'm assuming your clients aren't btrfs here, and that UrBackup is simply using this for its own organisational purposes as it can.


No, what I mean is that btrbk is console only, so THIS makes the send/receive UI useless :stuck_out_tongue:, not all of the Rockstor UI. Sorry, my bad on this one.

I have asked on the UrBackup forum and am waiting for an answer; I will post the result as soon as I have one.

Also, I am aware that there are other options, but we use UrBackup at our office with great success (on a Windows server) and we don't want to replace the existing solution. What we are trying to avoid is sending all the data to 2 places when you can just send the incremental. And since it's block level (vs rsync, which has to go through every folder and file), the only way with Windows is to use data replication, but you need a Domain to do that. Also, UrBackup works best with btrfs, so I will make it work with your awesome project. It's only a matter of time!
Keep up the good work.