Due to operator malfunction, I managed to crash both of my Rockstor machines, and only just recovered them completely…
During the recovery process, the poor design of my storage pools became apparent, and I am now looking to switch my pools. I need to move the set of drives on my Acer Rockstor device to the Rockstorage device, and the other way around.
My greatest concern is that my Rock-on shares need to move. I have more than enough space on my boot drive to move them there, but I'm not sure (1) if that is the smartest move, and (2) how to do it.
If there is documentation somewhere, I'd really appreciate being pointed in the right direction.
Might it be best to edit this thread, so as to avoid suggestions that are only relevant if you had only one system working:
And given both systems are currently working, the simplest option may well be to just switch all drives, including the system drives, so that each system drive stays with its configured pool drive set. There are potential caveats here, i.e. on a re-install of one system it would pick up the appliance ID of the motherboard it then found itself on; and for mechanisms such as replication, you would then be required to re-install the other machine so that it, in turn, picked up its appliance ID from the former machine's motherboard: so that could get tricky.
Just a thought, as this may suit your requirements. The main complexity, as you surmised, is the Rock-ons root share, as we don't currently transition rock-on configuration via the Configuration Backup and Restore mechanism; which necessitates that on a re-install the rock-ons root share is best wiped and a fresh rock-on (docker) setup done. But this rock-on / docker re-setup can of course inherit your prior config and data shares, so they should all pick up where they left off. Assuming no rock-on config or data shares were on the system drive, that is.
It is generally not advised that you put any data, i.e. rock-on config or data shares, on the system drive. But it is less risky to put your rock-ons root share on the system drive, as it contains nothing that can't be re-established via a rock-on re-install. Whereas your rock-on config and data shares, if placed on a system disk, will be subject to loss if you ever have to re-install.
Hope that answers at least part of your question. Also note that if you have a spare device you can always try a test config backup: download that config to your client machine, disconnect your functional system disk, re-install on the spare disk, and attempt a config restore. If the machine you are re-installing on is already subscribed to the stable channel, then you just have to re-enter the appropriate activation code issued for that machine (motherboard). This gives you the option to at least try a re-install, which ultimately is a good thing to be comfortable with anyway.
And always remember that there really is no substitute for disconnecting all data drives during a re-install, as it is then impossible for a bug or human error thereafter to write to those drives. The Reinstalling Rockstor howto may be of interest here.
Hope that helps and apologies if I’ve wandered from the desired topic on this one.
I don't think I was clear about my goal. I have machines 1 and 2. Machine 1 has a fourteen terabyte pool and inferior hardware (less RAM, older motherboard, etc.); this houses all of my digital media. Machine 2 has a two terabyte pool, 8 gigabytes of RAM, and a newer motherboard, and this runs all of my rock-ons.
My goal is to eliminate Machine 1 in this house. I want to house my data and run my services from Machine 2, which would mean switching the pools. Understanding that this is possible, I just want to make it as seamless as possible and try to avoid re-creating my rock-ons from scratch.
I hope this clears up my request for direction.
BTW I edited my original post and hopefully eliminated the confusion.
OK, I think that helps clear things up. In which case you could try just moving machine 1's system disk and pool disks over to machine 2 (after you have disconnected its system disk and all of its prior pool disks). That way your current machine 1 configuration and fourteen terabyte pool (with data and rock-ons) will then be booted up on the newer hardware. That should be just a matter of disconnecting all drives from both machines and moving all prior machine 1 disks, including the system disk, over to machine 2 (the newer machine). You may have to configure the BIOS in machine 2 to boot from the freshly attached (prior machine 1) system disk, but other than that things should just pick up from where they left off.
Be sure not to attach both system disks to the same machine at the same time: if both system pools are labeled the same, i.e. rockstor_rockstor as is usual, Rockstor will get confused.
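If you ever do need both system disks visible to one machine, one approach (a sketch only: the device name /dev/sdb3 is an example, not taken from either of your machines) is to check and change one of the btrfs labels first:

```shell
# List all btrfs filesystems with their labels; two identical
# "rockstor_rockstor" entries would be the clash described above.
btrfs filesystem show

# Show the current label on one system partition
# (device name is an example only - check yours with the command above).
btrfs filesystem label /dev/sdb3

# Re-label that system pool so the two no longer collide.
btrfs filesystem label /dev/sdb3 rockstor_rockstor2
```

These commands need root and an actual btrfs device attached, so treat them as illustrative rather than copy-paste ready.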
If you have any configuration, rock-ons etc., on machine 2's two terabyte pool, it may just be easier to transfer that data via a client. But as long as there are no name clashes between the pools (each pool has to have a unique name) or between their shares, then you could, disk ports allowing, attach the 2 terabyte pool to the new machine also. To access it you would then have to import it via the Disks page, see: Import BTRFS Pool. But it is best not to have it attached upon first boot after the system and data disk transfer from machine 1 to 2, as there is then less to complicate that initial boot up.
Wow, I can't win for losing. I tried the import, but failed to read the part about both pools and confused Rockstor.
Gave up on it and re-installed: named the device, imported the disks, created the shares. Exported the data shares via Samba and had no trouble mounting them with my Nvidia Shield. All was happy and smiling.
Then I activated the system, stupid me…
Ran yum update in the CLI so I could be 100% sure that it was complete. It finished and said that it was complete. I then ran yum update rockstor: "No packages marked for update". Went back to the Web-UI:
System is running the latest Rockstor version: 3.9.2-33
All good, I thought. Started to try and install rock-ons, confidently went to Shares to create LL_Conf for LazyLibrarian, and surprise!!!
Traceback (most recent call last):
File "/opt/rockstor/src/rockstor/rest_framework_custom/generic_view.py", line 41, in _handle_exception
File "/opt/rockstor/src/rockstor/storageadmin/views/share.py", line 191, in post
File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 1064, in share_pqgroup_assign
return qgroup_assign(share.qgroup, pqgroup, mnt_pt)
File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 1105, in qgroup_assign
CommandException: Error running a command. cmd = /sbin/btrfs qgroup assign 0/2134 2015/61 /mnt2//AcerRockStorage. rc = 1. stdout = ['']. stderr = ['ERROR: unable to assign quota group: Invalid argument', '']
Unbelievable. This time I lost all my data on the pool.
As there have been a number of quota management fixes and your pasted error message is quota related, could you first confirm the actual installed Rockstor version via:
yum info rockstor
If you are indeed on 3.9.2-33, as the Web-UI suggested, then I suggest that you initially disable quotas on the given pool.
If you are not, then just do a:
yum update rockstor
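As a sketch of the two branches above (the mount point /mnt2/AcerRockStorage is taken from the error output earlier in this thread; substitute your own pool's mount point):

```shell
# Confirm the actually installed package version
# (the Web-UI can report stale information if old code is still running).
yum info rockstor

# If already on 3.9.2-33: disable quotas on the affected pool
# to side-step the qgroup assign failure.
btrfs quota disable /mnt2/AcerRockStorage

# If not on 3.9.2-33: update the rockstor package first, then reboot.
yum update rockstor
```

Both commands need root, and the btrfs one needs the pool mounted, so again treat this as illustrative.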
I wouldn’t reach that conclusion just yet. If a pool (btrfs volume) or one of the shares (btrfs sub-volume) fails to mount, then the result looks to be an empty directory. To help others diagnose what is going on here could you please paste the output of the following commands:
This one to get a system-wide btrfs overview:
btrfs fi show
A Pools page screen grab might be nice also.
And to see what’s actually mounted you could also paste the output of:
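The command itself appears to have been lost in the copy and paste above; a likely candidate (my assumption, not necessarily the one originally given) is:

```shell
# Show everything currently mounted; Rockstor mounts pools and shares
# under /mnt2, so filter on that path.
grep mnt2 /proc/mounts || echo "nothing mounted under /mnt2"
```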
Again a snapshot of the shares screen would also show the mount status of both the subvols and their associated pools.
My suspicion is that a quota issue has snagged a pool or share mount (this shouldn't happen though) and so you are receiving a blank mount point, rather than the contents of a subvol. This does not yet mean the data is not there.
Whereas if share_pqgroup_assign (the next thing in your log entry) fails, which it did via the CommandException, then the program does not proceed to the mount_share(s, mnt_pt) call.
Now this is not usual but directly after an update it can be the case that old code is still in memory. Not always but it can be. You may just be running into this. Note the italicised note in the Reinstalling Rockstor howto:
“N.B. given this is a new install it is advisable to reboot anyway to make sure all is well before doing the data import, this will ensure you are using all of what has just been updated.”
In which case it may just be that you need to reboot to ensure you are running all new code.
Paste the requested outputs anyway, and let us know how the post-reboot goes.
I'd go more with a 'highly unlikely', given your described procedure. Most likely older code is still in play and not playing nice with other code that has been updated: hence the CommandException above.
Unless of course the installer accidentally installed to one of your prior data disks:
From Reinstalling Rockstor:
“Ensure that the correct disk is ticked. Rockstor, with default boot options, will only show sda but if you see other disks due to custom boot options be very careful with where the tick is; there should be only one disk ticked.”
Always best to disconnect any data drives when doing an install / re-install anyway, especially if they hold the only copy of any data.
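Before ticking anything in the installer, it can also help to double-check from a console which disks are actually visible (output shape varies by system; this is just a quick sanity check, not a Rockstor-specific tool):

```shell
# List all block devices with size, type and mount point, so the
# intended install target (e.g. the lone remaining system disk)
# is unambiguous before the installer writes anything.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```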
Hang in there and keep calm, and remember to take note of all that you do, as importing pools, transitioning from one machine to another, and updating / re-installing entire OSes / main packages is a non-trivial exercise. Well done on the yum update move by the way: we should really get to adding that warning soon. With the ISO now quite old, there is always a lot to go in during the initial package update.
Hope that helps and see how you go after a reboot.
OK, this nightmare is over. I ended up recreating everything on both devices.
But this time I put the correct disks in the correct machine, and added the rock-ons on the correct machine as well.
Not the end of the world as redundancy is my middle name.