Does Rockstor support HM-SMR drives?

Hi, Rockstor newbie here! Like the topic says, does Rockstor support zoned devices (HM-SMR)?
I just installed Leap 15.6 on bare metal (Intel X299 with its internal SATA controller). The machine previously had Ubuntu 22.04 on it, where I successfully formatted my WD HC620 14 TB HDD, so this controller passes host-managed commands.

But in Rockstor, when I enter on the command line:
mkfs.btrfs -O zoned -d single -m single /dev/sda
I get:
ERROR: unrecognized filesystem feature 'zoned'

In the WebUI, when entering Storage → Disks, I get:
Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/rest_framework_custom/generic_view.py", line 40, in _handle_exception
    yield
  File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 417, in post
    return self._update_disk_state()
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/contextlib.py", line 81, in inner
    return func(*args, **kwds)
           ^^^^^^^^^^^^^^^^^^^
  File "/opt/rockstor/src/rockstor/storageadmin/views/disk.py", line 212, in _update_disk_state
    pool_info = dev_pool_info[dev_name]
                ~~~~~~~~~~~~~^^^^^^^^^^
KeyError: '/dev/sda'

I have btrfsprogs-6.5.1-150600.2.4.x86_64 installed, which should be the newest, right?
Kernel is Linux server 6.4.0-150600.23.73-default

lsblk --zoned says:
localhost:~ # lsblk --zoned
NAME ZONED ZONE-SZ ZONE-NR ZONE-AMAX ZONE-OMAX ZONE-APP ZONE-WGRAN
sda host-managed 256M 52156 0 128 672K 4K

@suur13 welcome to the Rockstor community.
The kernel should support zoned mode. However, when checking the list of available features, it seems it's not available, which is probably due to the version of btrfs-progs that ships with Leap 15.6. On one of my VMs with Leap 15.6 it is not offered:

btrfs-progs v6.5.1
 # mkfs.btrfs -O list-all
Filesystem features available:
mixed-bg            - mixed data and metadata block groups (compat=2.6.37, safe=2.6.37)
quota               - quota support (qgroups) (compat=3.4)
extref              - increased hardlink limit per file to 65536 (compat=3.7, safe=3.12, default=3.12)
raid56              - raid56 extended format (compat=3.9)
skinny-metadata     - reduced-size metadata extent refs (compat=3.10, safe=3.18, default=3.18)
no-holes            - no explicit hole extents for files (compat=3.14, safe=4.0, default=5.15)
free-space-tree     - free space tree (space_cache=v2) (compat=4.5, safe=4.9, default=5.15)
raid1c34            - RAID1 with 3 or 4 copies (compat=5.5)
block-group-tree    - block group tree to reduce mount time (compat=6.1)

Whereas on Ubuntu (25.04 in my case) the option appears:

btrfs-progs v6.12
# mkfs.btrfs -O list-all
Filesystem features available:
mixed-bg            - mixed data and metadata block groups (compat=2.6.37, safe=2.6.37)
quota               - hierarchical quota group support (qgroups) (compat=3.4)
extref              - increased hardlink limit per file to 65536 (compat=3.7, safe=3.12, default=3.12)
raid56              - raid56 extended format (compat=3.9)
skinny-metadata     - reduced-size metadata extent refs (compat=3.10, safe=3.18, default=3.18)
no-holes            - no explicit hole extents for files (compat=3.14, safe=4.0, default=5.15)
fst                 - free-space-tree alias
free-space-tree     - free space tree, improved space tracking (space_cache=v2) (compat=4.5, safe=4.9, default=5.15)
raid1c34            - RAID1 with 3 or 4 copies (compat=5.5)
zoned               - support zoned (SMR/ZBC/ZNS) devices (compat=5.12)
bgt                 - block-group-tree alias
block-group-tree    - block group tree, more efficient block group tracking to reduce mount time (compat=6.1)
squota              - squota support (simple accounting qgroups) (compat=6.7)

You can tinker with getting a more recent version of btrfs-progs onto Leap (see the sketch below), or, since you're experimenting already, you could also install either the Tumbleweed or Slowroll version of Rockstor, whose btrfs-progs versions support zoned mode (I quickly checked there; they're using btrfs-progs v6.14).
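If you'd rather stay on Leap, a rough sketch of what I'd try first (untested on my side, and the OBS filesystems repository URL and alias below are my assumption of the usual layout, so please verify them before running anything):

# add the openSUSE filesystems repository (URL/alias are assumptions - check they exist for Leap 15.6)
zypper addrepo https://download.opensuse.org/repositories/filesystems/15.6/filesystems.repo
zypper refresh
# pull btrfsprogs from that repository instead of the stock Leap package
zypper install --from filesystems btrfsprogs
# confirm the zoned feature is now listed
mkfs.btrfs -O list-all | grep zoned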

I don't think it will work through the WebUI out of the box, meaning creating the RAID profiles, having the disk formatted, etc.
However, I suspect that if/when you have successfully created the filesystem/RAID setup on the command line, you might be able to import it via the WebUI. Whether it will then behave like any other btrfs device Rockstor currently handles … I don't think anyone has tested this scenario before.
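To be explicit about the command-line part, the creation/verification step would look roughly like this (the by-id path is just a placeholder for your HC620, and I haven't tested this on a zoned device myself):

# create a single-device zoned btrfs filesystem (by-id placeholder - use your actual device link)
mkfs.btrfs -O zoned -d single -m single /dev/disk/by-id/<your-HC620>
# sanity-check the new filesystem before attempting an import from the WebUI
btrfs filesystem show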

So, please, if you do continue, report back so we can gather some additional experience in that space.


Thanks for the hint. Installed Slowroll and it worked! The command line wasn't even needed: once I went to the WebUI, it recognized the disk and imported my previous pool.
I don't have a RAID setup to play around with, just a single disk.
Before moving on, can you please tell me whether I can install QEMU, KVM and libvirt on Slowroll and create VMs? The reason I went with Leap initially was the possibility of installing those.

Strange thing - I lost the root password (I'm sure I didn't type it wrong; I typed it three times: twice at creation and once after setup. But after the first restart the root password no longer works). Right now it's easier to reinstall than to start the SUSE recovery option. I can now also choose either Slowroll or Tumbleweed, based on preferences regarding VMs.

Thanks again.


Awesome that @Hooverdan could get you up and running, and great to know the WebUI doesn't fall on its face too much anymore.

You definitely can run KVM etc. on Slowroll; there are zypper patterns for that. I'll get those to you later today unless someone else beats me to it.
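Off the top of my head, and pending that confirmation (please double-check the exact pattern names with zypper search -t pattern kvm), the usual openSUSE route is something like:

# install the KVM server and tools patterns
zypper install -t pattern kvm_server kvm_tools
# enable and start the libvirt daemon (newer setups may use the modular daemons instead)
systemctl enable --now libvirtd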

With regards to your password issue: are you failing to log in when using SSH, or when logging in at the machine's console itself?
I'm asking because I wonder if you're hitting the fact that root login by password is disabled by default on Slowroll/TW (openSUSE default). Have you enrolled an SSH key while installing Rockstor, as detailed in our docs?
https://rockstor.com/docs/installation/installer-howto.html#ssh-key-enrollment-tw-sr

No worries if you haven’t, you can still do it now.
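If you end up doing it after the fact from the machine's console, the manual equivalent is just the standard OpenSSH setup (nothing Rockstor-specific):

# as root on the Rockstor box: prepare the authorized_keys file
mkdir -p /root/.ssh
chmod 700 /root/.ssh
# paste the contents of your local public key (e.g. ~/.ssh/id_ed25519.pub) into:
vi /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys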


Thanks. It was both - SSH and console. Not sure what happened during the install. But Rockstor always creates a root password (Leap, TW, Slowroll) during install, right?
Anyway, it doesn't matter. I had an identical SSD, so I installed a second copy, chrooted into the first, and reset the password (rough steps below). Now I have two identical Slowroll installs :slight_smile:
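For reference, the reset was roughly this (device name is just an example, and this is from memory):

# booted the second install, then mounted the first install's root filesystem
mount /dev/sdb2 /mnt
# changed the root password inside the old install
chroot /mnt passwd root
umount /mnt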
Then I thought I could make RAID 1 in the Pools UI, but it won't let me create RAID for the root pool?

Actually, I meant Cockpit, which required Leap according to this tutorial:
