Suggested pools and shares for a pair of SSDs?

I have a pair of new 500-GB SSDs (identical models) in an old PC (Intel DH77EB with 24 GB RAM). SSD1 is running openSUSE Tumbleweed as a server and currently has no empty partitions, so the drive is not immediately available for use in a pool. I installed Rockstor on SSD2, once again without empty partitions. Presumably, I will replace Tumbleweed with Rockstor, and then use both drives for a home server to run Nextcloud and other self-hosted applications.

  1. For testing Rockstor with rock-ons on a single drive, should I [a] just create a share on the installed ROOT pool (not recommended for production) or [b] partition the drive and reinstall Rockstor, reserving space on another partition for pools/shares or [c] something else?
  2. Rockstor only uses a small portion of the disk. Assuming there is room on a single disk for everything I need, does option 1b make sense for production (leaving the other drive available for backups, data storage, or raid usage)?

Edit 2022-11-08: My own answers, based on current understanding.

  • Preferred members of a pool are whole unpartitioned disks, like the one I used for the Rockstor installation, suggesting that I should keep the default whole-disk 'pool' and create shares on it as needed.
  • On the other hand, using the system pool for the rock-ons root share is not recommended, suggesting that I should partition the drive instead.
  • Presumably, separating appliances from the operating system is more important than the convenience afforded by whole-disk pool members, so if I want to put the whole drive to use I guess the more acceptable compromise is to reinstall Rockstor on a smaller partition. Is this correct?

@rinomac Welcome to the Rockstor community forum.
You have pretty much, as you say, answered your own questions here, but I'd just like to confirm:

That's not something we support, as it goes. Our system-drive partition logic would likely get very confused (read: would likely break). We have far better partition-awareness capability for our data drives, but alas this has not yet been adapted to the system drive. When it is, our code will be far cleaner, as the data partition-awareness code is much newer and more 'proper', whereas our system partition awareness was basically a hack early on in the project. But all in good time, hopefully.

Yes, we try to keep things as small as possible, with the intention that a dedicated smaller device will hold only the system. But as you have realised, we do have share-creation capability on the system drive; we nearly didn't, as it goes! When we moved from our v3 CentOS days to our current "Built on openSUSE" v4+, I seriously considered removing this capability. But it's just too handy sometimes, and not all folks need a gazillion-drive raid arrangement. Plus things like:

ā€œThe Rock-ons rootā€: Rock-ons (Docker Plugins) ā€” Rockstor documentation

are a prime candidate for the system drive, even if it's quite plainly non-optimal. We are trying to be flexible/pragmatic while still encouraging folks to separate system and data. Plus, bare-metal recovery is way easier if there is no use of the system drive other than what the installer puts there. We can, for example, restore all the Rock-ons from our config saves if the rock-ons-root is not on the system drive (and one imports the pool first, of course). Thanks again to @Flox for this rather magic feature. But anything on the system drive is basically toast if that single drive dies, and we have, as yet, no capability to btrfs-raid up that drive. Again, in time, once we have back-ported our partition awareness to the system drive and replaced its hacky approach, we should be able to offer some limited raid capability there.

Correct. It cuts out many decades of fake standards and layers and brings us down to a single manager, up-from-the-metal btrfs (and its implied kernel block layer, of course). That's a major move forward if you've ever looked really closely at partitions across the ages :slight_smile:.

Agreed, but again, it is super handy and really quite OK in many settings, as long as folks appreciate the added complexity of reconstructing the system or moving it from one system disk to another. Plus, some tiny systems can really benefit from the likely extra space on the system SSD, again for things like the rock-ons-root. That way, if they have an SSD for system and HDDs for data, it can help keep the HDDs asleep for longer periods of time.

Not really, as it's just not ideal at all to do that. Far better to stick to whole-disk use. Plus, we only support a single btrfs partition per device, so you would then be strictly limited in your btrfs use, which kind of defeats the purpose of using Rockstor in the first place.

Re the 'home' share, that was another candidate not to 'surface' within the Web-UI. I was torn on this one when we moved from v3 to v4, as major version changes are really the only time we should do such things. But some folks may already have depended upon this setup, and we were super keen to have the transition be as smooth as possible. We may yet remove the 'home' surfacing and likely put it behind some advanced option or the like, to further reinforce a cleaner separation. But again, all in good time, and we have quite enough on as it is. In the long run I expect us to remove the 'home' access by default (but pop it in a config section we don't yet have).

Yes, and no. Yes to "… so if I want to put the whole drive to use I guess the more acceptable compromise is to reinstall Rockstor on a smaller [drive, not partition]."

We just don't do any partition arrangement on the system drive other than what the installer sets up. There are likely many workarounds, but that again defeats the purpose of an appliance-like distribution such as Rockstor. Plus, one can likely easily source a small, fast SSD or the like for the ideal dedicated system drive and gain all the advantages. Also, we have in our:

ā€œMinimum system requirementsā€: Quick start ā€” Rockstor documentation

16 GB drive dedicated to the Operating System (32 GB+ SSD recommended, 5 TB+ ignored by installer). See USB advisory.

Which would likely still have loads of space left as it goes.

Do let us know if you have any improvements re our docs on this. It's something we continually work on. Or consider doing a pull request on them if that takes your fancy:

and we have a guide on contributing to the docs, in our docs:

"Contributing to Rockstor documentation": Contributing to Rockstor documentation - Rockstor documentation

Hope that helps. And well done for trawling through the docs already and sharing your interpretations. We try to make them to-the-point, but there are a lot of intersecting points!


Thanks @phillxnet. I appreciate your pragmatism about the design. Allowing share creation on the system disk adds helpful options for repurposing old hardware and otherwise making do.

What I have at the moment is two 6-Gb/s SATA ports and two 3-Gb/s ports, and, of course, the faster ports are occupied by the pair of SSDs. However, considering your response, I took another look at the motherboard and rediscovered a forgotten 6-Gb/s mSATA+PCIe connection that I could use to add a system disk at a cost commensurate with this long-in-the-tooth machine.

My conclusion is that if the system and other data must coexist on the same disk, then I should stick with the default installation (using the whole disk for Rockstor).

I am more interested in server functions than mass data storage, so even if I don't add a system drive, I can use SSD1 for system and rock-ons, SSD2 as a copy of SSD1, and other means (HDDs, external drives, online, etc.) for backups.

Do let us know if you have any improvements re our docs

Probably all noob observations, but here are some things I noticed:

  • I was confused about the location of the home share, since it was not clear from the GUI if this meant /home or /home/username or something else. I did understand the caution against creating a rockons-root in home, so I experimented by making a couple of new shares on ROOT and then searched for and located them in the /mnt2 directory, where I also found the home directory in question.
  • Shares were created on ROOT by Linux user root when I had logged in via the web UI as username, so I wondered if there should be a rockstor user (analogous to the postgres user) for security reasons. Specifically, maybe Rockstor was configured to work differently in ROOT compared to doing the same tasks in the recommended pooling configuration.
  • I created two shares in addition to the existing home share. The second is editable and deletable, but the first is not, and I don't know why.

I said this

"I created two shares … The second is … deletable, but the first is not, and I don't know why."

Then I found this in the docs on deleting a share

"if a share is exported to remote clients via sharing protocols, or has snapshots, it cannot be deleted"

In my case, the first share must already have had a snapshot, presumably as a result of creating the second share.

@rinomac Hello again.
Re:

Creating one share has no bearing on any other, unless it's a clone, so I'm not quite sure what's going on there, actually. Rock-ons work via snapshots, as we use docker's btrfs back-end to manage the 'layers'.

Did you use one of these shares as a Rock-ons-root? Also, what was the message you got when trying to delete this share?
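
If you fancy some read-only digging at the command line, something along the following lines should show whether either share has snapshots or anything else unusual attached to it. Note the share name below is purely hypothetical, and on the system pool the snapshot listing will also include the openSUSE boot-to-snapshot entries:

btrfs subvolume list -s /mnt2/ROOT         # read-only: list only the snapshot subvols on the ROOT pool
btrfs subvolume show /mnt2/my-first-share  # read-only: flags, parent UUID and any snapshots of this share (hypothetical name)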

On first reading this, it looked like a situation that can arise when one creates a subvolume on the system drive that is then inherited via a Rockstor re-install or complete reset (i.e. a db wipe and re-do, likely from a forced delete and rpm re-install or the like, or via an .initrock file wipe and full rockstor service re-start). In these cases, removing that subvolume must be done via the command line. Apologies for the lack of clarity here, just quickly nipping in with some possible explanation. However, you don't mention doing a manual reset or rpm uninstall/re-install or the like. A similar situation can arise when users are created on the system before the install: the db has no record of them being 'managed' by Rockstor, so they are considered as belonging to the system rather than created by the user, as it has no record of them having been created. We may well have a bug or two here concerning subvols on the root drive during the initial install. If you can find and report a reproducer for this, i.e. "if one does this, the resulting subvol is not subject to normal Web-UI management", then we can create an issue (if it is considered a bug) and be able to prove its fix in the future.
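
For completeness: when such an orphaned subvol does need removing by hand, it is a plain btrfs subvolume delete of its path as seen under the pool's /mnt2 mount. The path below is purely hypothetical, and this should only ever be done for a subvol the Web-UI no longer manages:

btrfs subvolume list /mnt2/ROOT              # confirm the stray subvol's exact path first
btrfs subvolume delete /mnt2/ROOT/old-share  # "old-share" is a hypothetical leftover subvol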

But I've definitely had something similar when doing source builds on top of prior installs, and have yet to document the exact procedure as it's a bit of a corner case for most users deploying via our installer.

You have to be very careful with this kind of approach. Bear in mind that almost the entire root is under a snapshot-type arrangement itself, i.e. the boot-to-snapshot stuff we inherit from our openSUSE base:

https://doc.opensuse.org/documentation/leap/archive/15.0/reference/html/book.opensuse.reference/cha.snapper.html#sec.snapper.snapshot-boot
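
Those rollback snapshots are managed by snapper, so a quick way to see them on a stock openSUSE-based install (numbers and descriptions will of course differ per system) is:

snapper list    # shows the numbered root-filesystem snapshots behind .snapshots/N/snapshot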

The ROOT pool is a little more complex than the data pools we create, and we also hide a lot of that complexity. So do take care with manual subvol creation at the command line, if that is what you did; it is all a little stranger than it appears. We in fact mount the ROOT pool and its 'home' subvol to be able to manage elements of the pool as a whole, but the actual "/" mount point is, itself, a subvolume, albeit a default one I think (from memory). There is a fractal nature to btrfs that makes management, especially of our boot-to-snapshot system drive, more complex than a regular linux root. And you are free to do whatever at the command line: but that does not mean that Rockstor will understand it, or even not be confused by it. Btrfs and its capabilities are far beyond what we present in our current Web-UI, and to keep things more approachable we dumb some of it down. Further reason to steer clear of using the ROOT pool or the system drive (very different levels). Also note that we mount everything in /mnt2: both the parent pool and all subvols of that pool. Hence you seeing 'home' there. "home" is actually a subvol on the ROOT pool (at the top level, I believe, as it is not rolled back with boot-to-snapshot, akin to /opt and /var, so we and our db are also not rolled back). So the 'native' path of home (found in fstab) is actually @/home!! :

UUID=c031a57d-2ae5-466e-b592-a760ff2958ca /home btrfs defaults,subvol=@/home 0 0

So do take great care on the ROOT pool; it's way more complex than it appears. And our requirement to manage it overall means we actually mount outside of "/", above all the snapshots that represent rollback points, but then ignore them (at least currently). We have to do this or we would be unable to access pool-level things. So, at least initially, you would be better off learning from the data pools regarding subvols etc., as things are far simpler there: no subvol inside subvol, just a top Pool (btrfs volume) with one-level-deep shares (btrfs subvols). The ROOT Pool (btrfs volume), however, has all that boot-to-snapshot complexity, and is mixed into yet another drive layer where it is only a partition on that drive, etc. So yes, look first to what is done on a data drive to help understand how we have to fit 'inside' and 'outside' the boot-to-snapshot arrangement. That particular complexity was quite the migration effort when we moved from the differently arranged CentOS btrfs use, where there was no boot-to-snapshot arrangement.

I.e. take a look here at a typical ROOT subvol arrangement:

rleap15-4:~ # btrfs subvol list /mnt2/ROOT
ID 257 gen 19 top level 5 path @
ID 258 gen 1372 top level 257 path .snapshots
ID 259 gen 1639 top level 258 path .snapshots/1/snapshot
ID 260 gen 1635 top level 257 path home
ID 261 gen 1540 top level 257 path opt
ID 262 gen 1639 top level 257 path root
ID 263 gen 106 top level 257 path srv
ID 264 gen 1649 top level 257 path tmp
ID 265 gen 1650 top level 257 path var
ID 266 gen 1357 top level 257 path usr/local
ID 267 gen 24 top level 257 path boot/grub2/i386-pc
ID 268 gen 20 top level 257 path boot/grub2/x86_64-efi
ID 270 gen 33 top level 258 path .snapshots/2/snapshot
ID 297 gen 466 top level 258 path .snapshots/27/snapshot
ID 298 gen 467 top level 258 path .snapshots/28/snapshot
ID 299 gen 626 top level 258 path .snapshots/29/snapshot
ID 300 gen 627 top level 258 path .snapshots/30/snapshot
ID 301 gen 735 top level 258 path .snapshots/31/snapshot
ID 302 gen 737 top level 258 path .snapshots/32/snapshot
ID 303 gen 1153 top level 258 path .snapshots/33/snapshot
ID 304 gen 1155 top level 258 path .snapshots/34/snapshot
ID 305 gen 1166 top level 258 path .snapshots/35/snapshot
ID 306 gen 1167 top level 258 path .snapshots/36/snapshot

and the active mount points within this ROOT pool's subvol nest:

rleap15-4:~ # mount
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=4096k,nr_inodes=1048576,mode=755,inode64)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,size=595792k,nr_inodes=819200,mode=755,inode64)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,size=4096k,nr_inodes=1024,mode=755,inode64)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
none on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/misc type cgroup (rw,nosuid,nodev,noexec,relatime,misc)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
/dev/sda3 on / type btrfs (rw,relatime,space_cache,subvolid=259,subvol=/@/.snapshots/1/snapshot)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=34,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=12771)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
/dev/sda3 on /.snapshots type btrfs (rw,relatime,space_cache,subvolid=258,subvol=/@/.snapshots)
/dev/sda3 on /boot/grub2/i386-pc type btrfs (rw,relatime,space_cache,subvolid=267,subvol=/@/boot/grub2/i386-pc)
/dev/sda3 on /root type btrfs (rw,relatime,space_cache,subvolid=262,subvol=/@/root)
/dev/sda3 on /srv type btrfs (rw,relatime,space_cache,subvolid=263,subvol=/@/srv)
/dev/sda3 on /usr/local type btrfs (rw,relatime,space_cache,subvolid=266,subvol=/@/usr/local)
/dev/sda3 on /boot/grub2/x86_64-efi type btrfs (rw,relatime,space_cache,subvolid=268,subvol=/@/boot/grub2/x86_64-efi)
/dev/sda3 on /home type btrfs (rw,relatime,space_cache,subvolid=260,subvol=/@/home)
/dev/sda3 on /var type btrfs (rw,relatime,space_cache,subvolid=265,subvol=/@/var)
/dev/sda3 on /opt type btrfs (rw,relatime,space_cache,subvolid=261,subvol=/@/opt)
/dev/sda3 on /tmp type btrfs (rw,relatime,space_cache,subvolid=264,subvol=/@/tmp)
/dev/sda2 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
/dev/sdb on /mnt2/first-5-14-pool type btrfs (rw,relatime,space_cache,subvolid=5,subvol=/)
/dev/sda3 on /mnt2/ROOT type btrfs (rw,relatime,space_cache,subvolid=257,subvol=/@)
/dev/sdb on /mnt2/rock-ons-root type btrfs (rw,relatime,space_cache,subvolid=257,subvol=/rock-ons-root)
/dev/sdb on /mnt2/smokeping-config type btrfs (rw,relatime,space_cache,subvolid=258,subvol=/smokeping-config)
/dev/sdb on /mnt2/smokeping-data type btrfs (rw,relatime,space_cache,subvolid=259,subvol=/smokeping-data)
/dev/sda3 on /mnt2/home type btrfs (rw,relatime,space_cache,subvolid=260,subvol=/@/home)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=297892k,nr_inodes=74473,mode=700,inode64)

So as you see, "/" (in our current boot-to-snapshot) is actually on subvol id 259:

mount | grep " / "
/dev/sda3 on / type btrfs (rw,relatime,space_cache,subvolid=259,subvol=/@/.snapshots/1/snapshot)

I.e. it's not the top level. Also note the "default" pivot capability of btrfs:
From: https://btrfs.readthedocs.io/en/latest/btrfs-subvolume.html#subvolume-and-snapshot
we have:

A freshly created filesystem is also a subvolume, called top-level, internally has an id 5. This subvolume cannot be removed or replaced by another subvolume. This is also the subvolume that will be mounted by default, unless the default subvolume has been changed (see subcommand set-default).
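
You can see that pivot in action on a boot-to-snapshot system. As a hedged illustration from this same JeOS instance (the id/gen values will differ on other installs):

rleap15-4:~ # btrfs subvolume get-default /
ID 259 gen 1651 top level 258 path @/.snapshots/1/snapshot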

Strange, right? I personally avoid referencing the top-level subvol as such; I think of and reference it as a volume. But btrfs is turtles all the way down, hence the fractal reference. Ultimately, boot-to-snapshot uses this snapshot-within-snapshot capability and alters the default subvol to manage a boot arrangement. We then have to look, to an extent, at exactly where the system is to know if a subvol is relevant to us. All a little non-trivial. But it's far simpler (if still a little weird) on our data pools, as we manage them from the ground up.
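
For contrast, here is the data pool on this same JeOS instance: just the pool (btrfs volume) with one-level-deep shares. The gen values below are illustrative, and a populated rock-ons-root would additionally carry docker-managed layer subvols beneath it, omitted here:

rleap15-4:~ # btrfs subvol list /mnt2/first-5-14-pool
ID 257 gen 1650 top level 5 path rock-ons-root
ID 258 gen 1622 top level 5 path smokeping-config
ID 259 gen 1650 top level 5 path smokeping-data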

And the ā€œhomeā€ subvol is mounted once by fstab and then again by us (in /mnt2 as mentioned above) for management purposes:

rleap15-4:~ # mount | grep "home"
/dev/sda3 on /home type btrfs (rw,relatime,space_cache,subvolid=260,subvol=/@/home)
/dev/sda3 on /mnt2/home type btrfs (rw,relatime,space_cache,subvolid=260,subvol=/@/home)

But the fact that btrfs subvol id=260 in the above case is mounted by fstab at /home (normal to older linux folks) does not preclude its being mounted elsewhere as well, such as we do at /mnt2/home to 'surface' it, mainly for historical reasons. Note too that btrfs subvolumes are also visible as directories within mounts of their parent Pools or parent subvols, so it can get quite strange. We therefore mount the parent pool at its top level to gain the ability to set pool-level stuff. So we have the mount point of /mnt2/ROOT:

mount | grep "ROOT"
/dev/sda3 on /mnt2/ROOT type btrfs (rw,relatime,space_cache,subvolid=257,subvol=/@)

hence we can also 'see' the contents of its subvolumes within that mount point:

rleap15-4:~ # ls -la /mnt2/ROOT/
total 4
drwxr-xr-x 1 root root  74 Nov  9 08:09 .
drwxr-xr-x 1 root root 132 Nov 14 17:21 ..
drwxr-xr-x 1 root root  10 Nov  9 08:09 boot
drwxr-xr-x 1 root root  48 Nov 16 18:35 home
drwxr-xr-x 1 root root  24 Nov 14 16:23 opt
drwx------ 1 root root 100 Nov 16 17:21 root
drwx------ 1 root root  78 Nov 16 19:32 .snapshots
drwxr-xr-x 1 root root  12 Nov  9 08:09 srv
drwxrwxrwt 1 root root 718 Nov 19 11:27 tmp
drwxr-xr-x 1 root root  10 Nov  9 08:09 usr
drwxr-xr-x 1 root root 110 Nov 14 16:22 var

with the btrfs view of subvols being:

rleap15-4:~ # btrfs subvol list /mnt2/ROOT
ID 257 gen 1686 top level 5 path @
ID 258 gen 1372 top level 257 path .snapshots
ID 259 gen 1651 top level 258 path .snapshots/1/snapshot
ID 260 gen 1682 top level 257 path home
ID 261 gen 1540 top level 257 path opt
ID 262 gen 1651 top level 257 path root
ID 263 gen 106 top level 257 path srv
ID 264 gen 1686 top level 257 path tmp
ID 265 gen 1686 top level 257 path var
ID 266 gen 1651 top level 257 path usr/local
ID 267 gen 24 top level 257 path boot/grub2/i386-pc
ID 268 gen 20 top level 257 path boot/grub2/x86_64-efi
ID 270 gen 33 top level 258 path .snapshots/2/snapshot
ID 297 gen 466 top level 258 path .snapshots/27/snapshot
ID 298 gen 467 top level 258 path .snapshots/28/snapshot
ID 299 gen 626 top level 258 path .snapshots/29/snapshot
ID 300 gen 627 top level 258 path .snapshots/30/snapshot
ID 301 gen 735 top level 258 path .snapshots/31/snapshot
ID 302 gen 737 top level 258 path .snapshots/32/snapshot
ID 303 gen 1153 top level 258 path .snapshots/33/snapshot
ID 304 gen 1155 top level 258 path .snapshots/34/snapshot
ID 305 gen 1166 top level 258 path .snapshots/35/snapshot
ID 306 gen 1167 top level 258 path .snapshots/36/snapshot

And here we finally see "@" sitting directly under the top level (id 5) in openSUSE.

And we also see our differently treated subvols of /opt, /root, /home, /var, etc.

Crazy right!

And we also see, within our current "/"-mounted snapshot of subvolid=259 (mounted by openSUSE as in the above detail at "/@/.snapshots/1/snapshot"):

rleap15-4:~ # ls -la /mnt2/ROOT/.snapshots/1/snapshot/
total 8
drwxr-xr-x 1 root root  228 Nov 14 16:25 .
drwxr-xr-x 1 root root   32 Nov  9 08:10 ..
drwxr-xr-x 1 root root 1660 Nov 16 17:24 bin
drwxr-xr-x 1 root root  684 Nov  9 08:10 boot
-rw-r--r-- 1 root root  112 Nov  9 08:10 config.bootoptions
-rw-r--r-- 1 root root   71 Nov  9 08:09 config.partids
drwxr-xr-x 1 root root   52 Nov  9 08:09 dev
drwxr-xr-x 1 root root 3594 Nov 16 18:35 etc
drwxr-xr-x 1 root root    0 Nov  9 08:09 home
drwxr-xr-x 1 root root  100 Nov 14 16:22 lib
drwxr-xr-x 1 root root 2978 Nov 15 12:59 lib64
drwxr-xr-x 1 root root    0 Mar 15  2022 mnt
drwxr-xr-x 1 root root  132 Nov 14 17:21 mnt2
drwxr-xr-x 1 root root    0 Nov  9 08:09 opt
drwxr-xr-x 1 root root    0 Nov  9 08:08 proc
drwxr-xr-x 1 root root    0 Nov  9 08:09 root
drwxr-xr-x 1 root root    0 Nov  9 08:09 run
drwxr-xr-x 1 root root 2288 Nov 16 17:24 sbin
drwxr-xr-x 1 root root    0 Mar 15  2022 selinux
drwxr-xr-x 1 root root    0 Nov 19 11:26 .snapshots
drwxr-xr-x 1 root root    0 Nov  9 08:09 srv
drwxr-xr-x 1 root root    0 Nov  9 08:08 sys
drwxr-xr-x 1 root root    0 Nov  9 08:09 tmp
drwxr-xr-x 1 root root  110 Nov  9 08:09 usr
drwxr-xr-x 1 root root    0 Nov  9 08:09 var

which is actually our current "/" mount (within its particular boot-to-snapshot instance):

rleap15-4:~ # ls -la /
total 12
drwxr-xr-x   1 root root  228 Nov 14 16:25 .
drwxr-xr-x   1 root root  228 Nov 14 16:25 ..
drwxr-xr-x   1 root root 1660 Nov 16 17:24 bin
drwxr-xr-x   1 root root  684 Nov  9 08:10 boot
-rw-r--r--   1 root root  112 Nov  9 08:10 config.bootoptions
-rw-r--r--   1 root root   71 Nov  9 08:09 config.partids
drwxr-xr-x  17 root root 3680 Nov 19 11:27 dev
drwxr-xr-x   1 root root 3594 Nov 16 18:35 etc
drwxr-xr-x   1 root root   48 Nov 16 18:35 home
drwxr-xr-x   1 root root  100 Nov 14 16:22 lib
drwxr-xr-x   1 root root 2978 Nov 15 12:59 lib64
drwxr-xr-x   1 root root    0 Mar 15  2022 mnt
drwxr-xr-x   1 root root  132 Nov 14 17:21 mnt2
drwxr-xr-x   1 root root   24 Nov 14 16:23 opt
dr-xr-xr-x 219 root root    0 Nov 19 11:26 proc
drwx------   1 root root  100 Nov 16 17:21 root
drwxr-xr-x  27 root root  800 Nov 19 11:27 run
drwxr-xr-x   1 root root 2288 Nov 16 17:24 sbin
drwxr-xr-x   1 root root    0 Mar 15  2022 selinux
drwx------   1 root root   78 Nov 16 19:32 .snapshots
drwxr-xr-x   1 root root   12 Nov  9 08:09 srv
dr-xr-xr-x  13 root root    0 Nov 19 11:26 sys
drwxrwxrwt   1 root root  718 Nov 19 11:27 tmp
drwxr-xr-x   1 root root  110 Nov  9 08:09 usr
drwxr-xr-x   1 root root  110 Nov 14 16:22 var

This may all vary a little from an installer-created Rockstor instance, as I've used a generic JeOS openSUSE instance for these terminal copies; that is what we use to build the packages themselves. We like to always ensure that we work on a generic openSUSE, even given a few repos and re-configs, but we should always mostly be "not confused" by the arrangement our upstream works with. As stated before, this took quite the adaptation on our part. And again, look to the data pools for a far simpler arrangement, as they have a few fewer layers.

So, in short, take great care with any manual intervention on the system drive; it's a little 'nested' there ;). And data drives are complex enough.

Hope that helps and thanks again for the feedback.


I admire your dedication @phillxnet. Thanks!

Creating one share has no bearing on any other, unless it's a clone, so I'm not quite sure what's going on there, actually.

Not a clone, and it was only an unconfirmed guess that creating a second share might have led to a snapshot of the first share, as the other scenarios in the documentation did not apply. I have since checked and found that neither of the shares I created has a snapshot (in the GUI).

Did you use one of these shares as a Rock-ons-root?

No. I created the two shares via GUI and then did not use them (or the home share) for any purpose or touch them after creation.

There is only one disk in the pool for now.

The first new share was created from the Pools menu. I believe I created the second share from the Shares menu (Add Share button). Both options open the same dialog, I think. Considering the warning message, maybe neither share is meant to be deletable.

Also, what was the message you got when trying to delete this share?

Only one of the two new shares included a trash icon in the GUI. Clicking it opens the Force Delete dialog, and I have not attempted to proceed further. Both shares have the same permissions (755), with root as the owner and group. If anything about this scenario is not what you expect and you would like me to investigate, please let me know.

So do take care with manual subvol creation at the command line

Yes, I was only trying to understand generally how the shares related to the system (browsing, not altering). Essentially, I wanted to know if creating shares on the ROOT pool or system drive would be any riskier or more troublesome than, for instance, keeping my user home and operating system together on a personal laptop. In other words: how important is the recommendation to separate shares from the system? Answer: important.

And the ā€œhomeā€ subvol is mounted once by fstab and then again by us (in /mnt2 as mentioned above)

In retrospect, I am not sure why I was confused about the home share and home directory. It should have been obvious that shares and directories are not the same.
