New hard drives for data transfer from Thecus

I am new to Rockstor.

I have installed Rockstor on an ASUS Z77 Premium based tower. I have two new Seagate IronWolf Pro 16TB drives to install as btrfs RAID 1. I think I should run tests to ensure they are reliable. What is the best way to test them?

The next task is to move data across from a Thecus N8800+. I have 14TB of data. I assume I should use rsync? Any advice, please?

I will then use a Windows 11 machine to connect, sort through the data, and move some of it to other drives. I think I should run maintenance processes to preserve data integrity. What would you suggest?

Thanks

@bootit welcome to the Rockstor community.

Other than running SMART tests (maybe the long one) I don’t think there is much additional testing you need to do. I’ve found the IronWolf drives pretty reliable (I’ve been running 4 for the last year without any issues). There is probably no one-size-fits-all way to do reliability tests. The RAID 1 will be part of your “insurance”. Though, as usual, you still want to have a backup plan that does not involve these two drives, since the gospel is that RAID is not a backup :smile: as you will find on this forum and elsewhere.
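If it helps, here is roughly how you could run the long test from the command line (a minimal sketch; /dev/sda is a placeholder for however your IronWolfs enumerate, and smartmontools must be installed):

smartctl -t long /dev/sda     # start the extended self-test; it runs on the drive in the background
smartctl -c /dev/sda          # shows the estimated duration of the extended test
smartctl -a /dev/sda          # once done, review the self-test log and SMART attributes

I believe you can also kick these off per disk from the S.M.A.R.T section of the WebUI.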

If you’re familiar with rsync then that’s what I would use. Once running, it’s pretty hands-off until your files are on the Rockstor box. But there are other options requiring some additional installation, such as syncthing or the like. Earlier this year, I came across this article, claiming superior speed and ease of use, but I do not have personal experience with it.
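As a minimal sketch of the rsync route over SSH (assuming SSH access on the Rockstor box; the hostname and paths here are placeholders):

rsync -av --progress /raid/data/share/ root@rockstor-nas:/mnt2/main/incoming/

Note the trailing slash on the source, which copies the directory contents rather than the directory itself.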

As for data integrity: you occasionally want to run a scrub job, especially if you’ve moved/changed a lot of data after your transfer. Since you’re using RAID1, I don’t think balancing will do anything for you (and it’s not really for data integrity).
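A scrub can be scheduled from the WebUI, or started by hand; a minimal sketch, assuming the pool is mounted at /mnt2/data-1:

btrfs scrub start /mnt2/data-1     # verifies checksums, repairing from the RAID1 copy where possible
btrfs scrub status /mnt2/data-1    # progress and a count of any errors found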

Regarding your plan to connect from Windows 11 and sort through the data:

Assuming these files will go to either local folders or other systems on your network: for moving files locally, doing it through Windows is fine. However, if you’re moving to other systems, and depending on how big/numerous the files you will be moving off the box are, it can be faster to go onto the Rockstor instance itself and trigger the moves to the other systems from there, taking Windows out of the middle (which would otherwise make things slower).
But that might also not be practical, because you would have to create samba connections to the other environments from within Rockstor (not possible via the WebUI currently) and then use the command line to perform the move, as sketched below - though there is the File Manager Rockon that could help facilitate the process. But again, if timing is not that big of a concern and you just want a convenient way of sorting through your files, Windows will get the job done.
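For illustration, that command-line route could look something like this (a sketch only; the host, share, credentials, and paths are placeholders, and it needs the cifs-utils package installed):

mkdir -p /mnt/other-nas
mount -t cifs //192.168.1.50/target-share /mnt/other-nas -o username=me,password=secret
rsync -av /mnt2/data-1/sorted-folder/ /mnt/other-nas/
umount /mnt/other-nas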

3 Likes

Thanks for the advice. I created the first pool.
Btrfs supports flushoncommit, but Rockstor’s UI doesn’t whitelist it. Is this a problem, or am I overthinking how much I need to safeguard my critical data?
Current mount options:
raid1 rw,noatime,space_cache=v2,commit=5,subvolid=5,subvol=/

Mounting the top-level subvolume (subvolid=5) gives you full control over the pool. I entered the commands and received this back:
rockstor-nas:~ # lsof +D /mnt2/data-1
rockstor-nas:~ # umount /mnt2/data-1
rockstor-nas:~ # mount -o rw,noatime,flushoncommit,commit=5,space_cache=v2 /dev/sdd /mnt2/data-1
rockstor-nas:~ # mount | grep data-1
/dev/sdd on /mnt2/data-1 type btrfs (rw,noatime,flushoncommit,space_cache=v2,commit=5,subvolid=5,subvol=/)

I made it persistent:

rockstor-nas:~ # nano /etc/fstab

GNU nano 7.2 /etc/fstab
LABEL=SWAP swap swap defaults 0 0
LABEL=ROOT / btrfs noatime 0 1
LABEL=ROOT /.snapshots btrfs noatime,subvol=@/.snapshots 0 0
LABEL=ROOT /home btrfs noatime,subvol=@/home 0 0
LABEL=ROOT /opt btrfs noatime,subvol=@/opt 0 0
LABEL=ROOT /root btrfs noatime,subvol=@/root 0 0
LABEL=ROOT /srv btrfs noatime,subvol=@/srv 0 0
LABEL=ROOT /tmp btrfs noatime,subvol=@/tmp 0 0
LABEL=ROOT /var btrfs noatime,subvol=@/var 0 0
LABEL=EFI /boot/efi vfat defaults 0 0
LABEL=ROOT /usr/local btrfs noatime,subvol=@/usr/local 0 0
LABEL=ROOT /boot/grub2/i386-pc btrfs noatime,subvol=@/boot/grub2/i386-pc 0 0
LABEL=ROOT /boot/grub2/x86_64-efi btrfs noatime,subvol=@/boot/grub2/x86_64-efi 0 0

and added this at the bottom:
/dev/sdd /mnt2/data-1 btrfs rw,noatime,flushoncommit,commit=5,space_cache=v2 0 0

Ctrl+O, then Enter to save.

rockstor-nas:~ # mount -a
rockstor-nas:~ #

No errors appeared, so the entry is valid and will apply on reboot.
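To double-check the options actually in effect without rebooting, findmnt is handy (using the same mount point as above):

rockstor-nas:~ # findmnt /mnt2/data-1

It prints the source device, filesystem type, and the live mount options, so you can confirm flushoncommit took effect.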

While labels and UUIDs are generally stable, there are edge cases where they can change:

  • Labels can be overwritten if you reformat or relabel a device.
  • UUIDs can change if the filesystem is recreated or cloned.
  • Device paths like /dev/sdd can shift if hardware is added or reordered.

In my case I stuck with /dev/sdd: it is explicit and easy to trace in a home lab where the hardware layout is known and stable. If drives might be added or reordered, LABEL= or UUID= would be the more robust choice.
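If you ever want that more robust variant, you can look up the stable identifiers and reference them in fstab instead (a sketch; the UUID below is a placeholder for whatever blkid actually reports):

rockstor-nas:~ # blkid /dev/sdd
/dev/sdd: LABEL="data-1" UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" TYPE="btrfs"

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt2/data-1 btrfs rw,noatime,flushoncommit,commit=5,space_cache=v2 0 0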

The whitelist (I assume you mean the tooltip that’s displayed) is not necessarily up to date. I think that entry has not been changed in quite some time (since 2018 in fact).
So, if the btrfs version on the system supports the flushoncommit mount option (or other more recent ones) and you want to use it, then you can enter it there. This way, the option is reflected in the Rockstor db (used for managing pools) and visible in the WebUI. Personally, I have not used this option, and I have not seen any issues reported since it became available, but that doesn’t mean much.

As you have probably noticed prior to your actions, Rockstor does not manage btrfs via /etc/fstab, for various reasons. Take a look at this wiki entry; while it is a few years old, the underlying approach to how Rockstor manages devices has not changed:

1 Like

A new option, flushoncommit, could now be made available as a feature in the pool options and reflected in the Disks documentation.

Is it worth it?

Yes, in some use cases. It forces metadata and data to disk on every commit, reducing the risk of corruption during power loss or crashes. You’re trading a bit of performance for peace of mind, and with your throughput already optimized, it’s a smart trade.

I wondered how Rockstor bypasses /etc/fstab.

Rockstor manages Btrfs pools and shares through its own internal database and systemd mount units, not through /etc/fstab. When you create or import a pool via the WebUI, Rockstor:

  • Registers the pool and its subvolumes in its PostgreSQL-backed database
  • Generates systemd mount units dynamically to mount those volumes at boot
  • Ignores /etc/fstab for pool management to avoid conflicts or duplication

This gives Rockstor flexibility to manage pools, snapshots, and shares independently of traditional Linux boot-time mounts.
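For illustration only, a systemd mount unit for a pool like the one above would look roughly like this (a generic sketch of the mechanism, not necessarily the unit Rockstor itself writes; the file name must be the systemd-escaped mount path, e.g. from systemd-escape -p --suffix=mount /mnt2/data-1):

# /etc/systemd/system/mnt2-data\x2d1.mount
[Unit]
Description=Mount btrfs pool data-1

[Mount]
What=/dev/sdd
Where=/mnt2/data-1
Type=btrfs
Options=rw,noatime,space_cache=v2,commit=5

[Install]
WantedBy=multi-user.target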

So why did the /etc/fstab entry work?

Because it mounted a top-level subvolume manually, outside of Rockstor’s pool management scope. That means:

  • Rockstor doesn’t “own” /mnt2/data-1 in its database
  • The /etc/fstab entry is respected by the system at boot
  • There’s no conflict, because Rockstor isn’t trying to mount or manage that path

This effectively creates a parallel, explicitly controlled mount that coexists with Rockstor’s automation, “layering personalised logic” on top of it.

Why this is a helpful move

  • You get full control over mount options like flushoncommit, which Rockstor’s UI doesn’t expose
  • You maintain predictable, auditable behavior on reboot
  • You avoid surprises from UI updates or database-driven changes

About Thecus appliances: I have been able to use the CLI to push and pull data to and from Rockstor using rsync.

The Thecus NAS can also use its appliance GUI to schedule backup jobs to Rockstor. This requires a workaround; I will post about how I did this later!

Using the CLI, I used this example command:
NB Helpful reference: thecus-nas-rsync-backup-guide

nohup rsync -av --partial --inplace --log-file=/mnt2/main/00Rockstor_logs/thecus_data1BU.log --password-file=/root/.rsync_pass --exclude='/mnt2/main/00Rockstor_logs/' Backup@192.168.1.230::data-1BU_Online /mnt2/main/incoming/data-1BU_Online/ > /dev/null 2>&1 &

What each segment means:

  • nohup: keeps the job alive even if you log out or close the terminal.
  • rsync: the file transfer utility.
  • -a: archive mode; preserves file metadata.
  • -v: verbose output (though suppressed here by the redirect).
  • -z: compresses data during transfer. I dropped this flag: compression is CPU-intensive, the Thecus is already maxed out, and it caused disk throughput to drop from ~120 MB/s to ~20 MB/s. Over a local gigabit LAN, where bandwidth isn’t the bottleneck, -z adds overhead without meaningful gain.
  • --partial: keeps partially transferred files for resumption.
  • --inplace: updates files in place rather than creating temp files, avoiding double disk I/O and fragmentation; crucial when syncing large files or with limited disk space, and it keeps timestamps and inodes consistent, which is helpful for long jobs on the Thecus. Used together, --partial ensures interrupted files aren’t lost, and --inplace ensures those partial files are resumed directly rather than replaced.
  • --log-file=: logs rsync activity to a file on Rockstor.
  • --password-file=: supplies the password non-interactively (but only works if the file exists; see the note after this list).
  • Backup@192.168.1.230::data-1BU_Online: connects to the Thecus rsync daemon using the Backup user and module.
  • /mnt2/main/incoming/data-1BU_Online/: destination folder on Rockstor.
  • > /dev/null 2>&1: silences all output to the terminal.
  • &: runs the command in the background.
  • --exclude='/mnt2/main/00Rockstor_logs/': in case I reuse the command to push data back; excluding the log folder avoids circular copying.
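For the password file itself, something along these lines is needed (the secret is a placeholder). rsync refuses to use a password file that other users can read, so restrict its permissions:

echo 'module-password' > /root/.rsync_pass
chmod 600 /root/.rsync_pass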

For testing, add -n, a valid shorthand for --dry-run in rsync.
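So a trial run of the same transfer could look like this (same placeholders as above); it lists what would be transferred without writing anything:

rsync -avn --partial --inplace --password-file=/root/.rsync_pass Backup@192.168.1.230::data-1BU_Online /mnt2/main/incoming/data-1BU_Online/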

2 Likes

I have created a new issue on the repository to consider an update to the advanced mount option list, to catch up with ongoing btrfs development:

1 Like