Migrating from a Dead FreeNAS

I've been using FreeNAS for a couple of years and was investigating migrating to Rockstor. Well, the FreeNAS server just died (could be the PSU or the motherboard; not sure, and I don't have spare parts).

What I want to do is install 12 TB of new storage in the new Rockstor server, then install the old ZFS drives/pools from the old FreeNAS server and migrate the data to the new storage. I know this would require installing the ZFS drivers in Linux; I've done that on a CentOS 7 server before. Once the data has been moved off the old ZFS drives, I'd wipe them and add them to the Rockstor pool.

I guess the question is: can the ZFS drivers be installed on the Rockstor server to copy the data from the old FreeNAS drives over to the new btrfs Rockstor pool?

Note: This is a personal server, not a company mission-critical server.

1 Like

@clink, welcome to the Rockstor community forums.

While I don't have any experience with importing ZFS pools/drives onto another system, it might be easier to follow the openSUSE path, since you're beginning from scratch with Rockstor.

My suggestion (and I am sure there are other opinions out there, @phillxnet) would be:

  • Install openSUSE (no Rockstor flavor) on a boot drive first, with btrfs on your new storage.
  • Add the ZFS drivers (maybe something like this here: https://build.opensuse.org/package/show/filesystems/zfs) and copy the data over from ZFS into the btrfs file system.
  • Once that is complete, create a clean installation of Rockstor on your boot drive (using the openSUSE installation instructions, since there is no ISO available yet) and import your btrfs pool with your data; Rockstor doesn't do anything special to btrfs, it just manages the standard implementation of it.
  • Then add the old ZFS drives as additional storage (they have to be completely wiped beforehand, so there won't be any conflict). Or keep the ZFS drives until you feel safe that Rockstor is working for you, and wipe/add them at a later point.

With this two-step installation you would avoid having all kinds of additional dependencies installed just to support the ZFS transfer, which you wouldn't need going forward and which would unnecessarily bloat the Rockstor core (or worse, create conflicts down the line because of different pinned versions of packages, etc.).
Of course, as @GeoffA always says, “backup your data” (or have a backup plan) before you do anything :slight_smile:
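
As a very rough sketch of the ZFS-install-and-copy step above (untested on my part; the repo URL, package names, pool name, and mount points below are all assumptions/placeholders, so check the filesystems project page linked above for the Leap 15.2 specifics), it might look something like:

    # Add the openSUSE 'filesystems' repo and install ZFS (repo URL and package names are assumptions)
    zypper addrepo https://download.opensuse.org/repositories/filesystems/openSUSE_Leap_15.2/filesystems.repo
    zypper --gpg-auto-import-keys refresh
    zypper install zfs zfs-kmp-default

    # See which pools live on the attached FreeNAS drives, then import read-only under a temporary root
    zpool import
    zpool import -f -o readonly=on -R /mnt/oldzfs tank    # 'tank' is a placeholder pool name

    # Copy the data onto the new btrfs storage (destination path is a placeholder)
    rsync -aHAX --info=progress2 /mnt/oldzfs/tank/ /mnt/newbtrfs/media/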

4 Likes

‘Backup’ is my middle name. Either that, or I simply get everyone’s back up :slight_smile:

4 Likes

@clink What @Hooverdan and @GeoffA said already, but one little note:

Although this is mostly true, Rockstor can only deal with a set arrangement of subvolumes. So I think it would be better to do the migration on a scratch system, as @Hooverdan suggested, but to use a 'scratch' Rockstor install to at least create the pool (vol) and shares (subvols) for the new arrangement. Otherwise, if you go with a plain Leap 15.2 system and do all your own vol/subvol creation, you may well end up with a subvol arrangement that Rockstor does not understand, and it would then not be able to import it in a way that works within the Web-UI. This may of course make adding the ZFS stuff a little more difficult, but not by much: Rockstor is now pretty much a JeOS variant of Leap 15.2 with the barest minimum of repos and a few Rockstor-specific rpms.

Once you are done with this scratch Rockstor install you can always do a fresh install and import your pool to avoid ‘carrying’ all the stuff that was required to host the ZFS capability.

And the obligatory link to our Rockstor 4 DIY installer build recipe:

Hope that helps.

3 Likes

@phillxnet thanks for clarifying, I honestly didn’t realize that there are exceptions around the subvolume management. Learned something new today!

2 Likes

There are two types of people in this world: those who have lost data and those who will lose data. :wink:

No worries, I have backups of the important stuff.

@Hooverdan Thank you for the info

@phillxnet Just to make sure I have the order of operations down:

  • deploy Rockstor
  • set up the new pool/volumes/subvolumes in Rockstor
  • install the ZFS drivers
  • install the old ZFS drive pool and import/mount it
  • copy/rsync the data from ZFS to btrfs
  • validate the data (see the sketch after this list)
  • remove the ZFS drives
  • reboot/reinstall clean BTRFS
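
For the validate step I have in mind a dry-run checksum compare with rsync, something like this (paths are placeholders):

    # Dry run with checksums: reports anything that differs between the ZFS source and the btrfs copy,
    # without changing either side
    rsync -rvnc --delete /mnt/oldzfs/tank/ /mnt/newbtrfs/media/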

After which I can dd from /dev/zero over the old ZFS drives, reinstall them, and use them to create or expand a pool.
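
Roughly this, per drive (the device name is a placeholder; wipefs/zpool labelclear should be much quicker than a full dd pass, which I'd only bother with if I wanted the data gone rather than just the signatures):

    zpool labelclear -f /dev/sdX    # clear the ZFS labels (placeholder device name)
    wipefs -a /dev/sdX              # remove any remaining filesystem signatures
    # or, to zero the whole drive:
    dd if=/dev/zero of=/dev/sdX bs=1M status=progress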

Then I can worry about the 40G InfiniBand install. lol

Thank you to everyone for the help/input.

@clink Re:

I assume the following step:

is meant to read “clean Rockstor” :slight_smile:.

Again, what @Hooverdan mentioned here: you may well be installing a lot of alien packages into the initial Rockstor install, and we just don't know how they will affect things. One example: the ZFS tools may well install/depend upon the multipath tools (used where two controllers are connected to the same drive). These tools are known to upset Rockstor, even if the hardware is not installed, as it can then see multiple names for the same device, and that currently worries it to distraction :). So yes, you may well find that with the ZFS dependencies installed the Web-UI gets a little confused. But at that stage you are on the command line for the data transfer anyway, given your indicated plan.

There might also be an additional step right at the beginning, but that may be difficult now that the original server hardware has died. I believe there is an 'export' step that is preferred for ZFS pools. Hopefully this can be skipped safely given your hardware predicament; I'm not that up on the ZFS side of things though, so hopefully others can chip in on this one. On the btrfs side the equivalent is a clean unmount, incidentally.
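
From the little I do know of that side, the commands involved are roughly the following ('tank' is just a placeholder pool name, and this is untested on my part), so hopefully someone more ZFS-versed can correct me:

    zpool export tank      # the 'clean unmount' equivalent, run on the old system while it still works
    zpool import           # on the new system, lists pools found on the attached drives
    zpool import -f tank   # -f forces the import of a pool that was never exported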

We have a section on using DBAN in our “Pre-Install Best Practice (PBP)” docs as an alternative here:
http://rockstor.com/docs/pre-install-howto/pre-install-howto.html#
Wiping Disks (DBAN): http://rockstor.com/docs/pre-install-howto/pre-install-howto.html#wiping-disks-dban

And again, for all new installs, I would strongly advise the Rockstor 4 route here, as it uses a far newer btrfs than our now-legacy CentOS-based variant. At least then, if you find any issues and are happy to report/engage with them, we can potentially fix them.

And as with other Copy-On-Write (COW) filesystems, such as ZFS, btrfs is best kept below around 80% capacity for performance and durability reasons.
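
You can keep an eye on that from the command line with something like the following (the mount point is just an example):

    btrfs filesystem usage /mnt2/mypool    # shows allocated vs. free space, per profile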

Hope that helps.

2 Likes

@Hooverdan Re:

Yes, it's pretty much that the Web-UI places, and expects to be in place, a certain subvol arrangement: mainly that shares/clones (btrfs subvols) are at the top level. But there are caveats there also, in particular on the system (OS) drive, where we have to account for the openSUSE boot-to-snapshot arrangement, or not, depending on whether it is installed; it is enabled by default for a Rockstor 4 DIY installer based install. I'm quite looking forward to improving our integration of that feature, actually. But if one is using a custom install based on a Leap 15.2 that did not result from our installer, then this feature may not be enabled, and that intrinsically alters the subvol hierarchy of the system pool (labelled/named ROOT).

Which reminds me, that is another expectation of Rockstor: it expects pools (btrfs vols) to be labelled with their names, though in some cases it can just enforce this. And another constraint that I know you are already aware of, but for the wider audience on this thread: all pools (btrfs vols) and shares/clones/snapshots (btrfs subvols) must have system-wide unique names. That is not a btrfs constraint but a Rockstor one. We may 'fix' this in time but it will not be for a while.

The reverse of the above, reading a btrfs 'creation' of Rockstor's, offers no surprises however, as we, in the end, simply execute a bunch of btrfs commands 'under the hood', with zero customisation beyond those commands; i.e. we do nothing at a lower level and, as of Rockstor 4, use only our upstream default kernel and btrfs-progs.
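
As a purely illustrative sketch (these are not our literal internal commands, and the device names, mount point, and pool/share names are just examples), the expected shape is roughly:

    mkfs.btrfs -L mypool /dev/sdb /dev/sdc       # the pool label matches the name Rockstor shows
    mount /dev/disk/by-label/mypool /mnt2/mypool
    btrfs subvolume create /mnt2/mypool/myshare  # shares/clones live at the top level of the pool
    btrfs subvolume list /mnt2/mypool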

Thanks for your input on this one. And it was only more recently that @Flox found that multipath incompatibility, when he saw some very strange behaviour on a dev machine that he in turn tracked down to a multipath systemd enabled status that we had missed along the line; it's now sorted in our OBS rockstor package, which sets up such defaults.

Hope that helps.

3 Likes

Yes, that one is on me. I've only had one cup of coffee this morning. lol

Actually, the multipath tools are for when you have remote drives (iSCSI or SAN) and multiple paths to those drives. However, you are correct that ZFS installs the multipath tools as a dependency.

If I remember correctly, that is a misconfigured multipathd service. When it is configured with the UUIDs of the drives, you should only see a single drive at the OS level. But it has been a few years since I configured multipathing. There is a story in there about getting it to work in the second-stage initial ramdisk of Linux, so the system could not only boot from the multipath device but also detect which data center it was in and, if it was in the DR data center, boot from the DR copy of the SAN. lol
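
If the ZFS install does drag multipathd in and the Web-UI starts getting confused, I'd expect something like this to show and stop it (assuming the usual systemd unit name, and that no actual multipath hardware is present):

    multipath -ll                         # list any multipath maps currently assembled
    systemctl is-enabled multipathd       # check whether the dependency left it enabled
    systemctl disable --now multipathd    # stop and disable it if nothing needs it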

There is, but you can recover the pool even if it was not exported; I've had to do that a couple of times.

This was one of the reasons I was looking at the migration before the server failed: one of my pools is at 90% and causing issues. The existing system could not handle any more drives. The new one I ordered will hold more drives, and I've ordered 4 TB SAS drives as the primary drives to create the new btrfs pool on. The old system had 3 TB SATA drives.

OK, what is the current status of Rockstor 4? Alpha, beta, stable?

I may be giving Rockstor a workout. My current HOME setup is

FreeNAS (now dead) with two pools of 3 x 3 TB drives: Pool #1 is a raidz1 (backups of home directories and persistent storage for the cloud) and Pool #2 is a ZFS stripe (media/video files).

Linux NAS server with one pool of 6 x 200 GB SSD drives (raidz1), connected via 40G InfiniBand to the server below.

Blade server with four blades, each with 24 cores and 128 GB of RAM, running CentOS 7 with KVM/QEMU VMs whose boot/drive images are on the Linux NAS. There are a handful of infrastructure VMs, and the other VMs run Docker in a swarm configuration (a 20-node Docker swarm in total) with persistent storage on the FreeNAS server. (Most of which is down due to FreeNAS dying.)

2 Likes

@clink Re:

and:

Cheers, and quite likely, assuming it is misconfigured upon being enabled with no (default) configuration, which was when this issue was noticed.

So, as of writing, we are at “Stable Release Candidate 6”: still releasing into testing first, with a placeholder of 4.0.4 in the Stable channel.

Nice. Do keep us posted (and pics), and remember to report your findings, ideally in focused forum threads, at least until we have a clean reproducer or understand the specific problem; whereupon, if it's a bug, it goes to a GitHub issue and awaits the appropriate attention from either a prior contributor or anyone else who fancies chipping in.

We do have a significant amount of technical debt within the project (Python 2, etc.) but we are actively working on it.

Hope that helps and good luck with the migration.

4 Likes

I wish I could add more, but I'm not technically savvy enough, I'm afraid. However, having read through your inventory of kit, I'd enjoy seeing some pictures of your stuff tbh.
Probably very wrong in so many ways, but NAS porn :sunglasses:

2 Likes

OK, when the new server and drives arrive next week, I'll do photos.

1 Like

For what it's worth, I've done a migration from FreeNAS to Rockstor. I just installed openSUSE 15.2, installed Rockstor on top (with the rpm package), and then also installed the ZFS drivers to import my data. I've had no problems so far. Maybe something will happen along the way later, shrug. But I plan to customize my install a bit anyway, so I'll have to deal with that…

4 Likes

@Marenz Thanks for the additional input on this one.
Re:

Yes, this can be viable, and as long as you make sure to do the various tweaks re repos, IPv6, AppArmor, etc. that we detail here:

you should be OK. But now that we have the installer recipe available, where these tweaks are pre-applied and the resulting system is minimised, it's a more common base, which helps with understanding the state of the system. Plus it's far easier, quicker, and a lot smaller on the install side now that we have our installer. And one can always customise that install, and will then likely end up with something closer to the more common Rockstor install resulting from the installer.

Do keep us posted, in focused threads, of any issues though as our installer is trying to stick as closely as possible to the base JeOS Leap images.

Cheers.

3 Likes

I'm in the process of migrating my data and have kept a list of the things I see that are missing, the little idiosyncrasies of the dashboard, and some things I would like to see.

I'm more than happy to write up my impressions as well as my findings; I'm just wondering whether you would like them in a single post or broken up.

2 Likes