Import BTRFS from OMV - no option to import

Hello All,

I am trialling Rockstor after using OMV for many years.

I need to ‘import’ the existing filesystem into the Rockstor system. I do not have the resources to transfer to new discs and start again. I need to import the data.

The screenshot shows the DISKS page, and the disks are all listed correctly. However, the 2 larger disks are part of the BTRFS filesystem made by OMV. The whole disks were used, not partitions, so I should be able to add them as a new pool.

The problem is that there is no option (no little arrow) to import the data. See the screenshot.

Am I doing something wrong? As I understand the instructions, the import button should be a down arrow next to the disk.

Can anyone explain this to me, or point me in the right direction to solve this, please?

Phill

I haven’t tried this myself but I’d be very wary of adding them as a new pool. Perhaps the devs can enlighten us whether this would result in data loss? I don’t want you to take this as a criticism but if you don’t have the resources to dump and reload the data, does that mean you’re not backing it up? I’ve seen 4TB external disks at less than £150 UK price, surely it’s a price worth paying?

1 Like

@PhillVS First off: welcome to the Rockstor community forum.

Yes, this does look very much like your drives are blank, so as @paulsmyth states, take no further Disk or Pool action within the Rockstor UI if you suspect they contain data. This is how blank drives show up, so they will be treated as blank, which is obviously not what you believe they are.

Now onto whether they are blank or not, and whether we are looking at a Rockstor bug, which would be good to know of course; your help in diagnosing this would be much appreciated.

Could you ssh into your Rockstor box as the root user (or, if you use the ‘System - System shell’, log in as admin first and then ‘su root’), then execute the following commands and paste the output here so that forum members can assess what state your drives are actually in:

```
btrfs fi show
```

and

```
lsblk -P -o NAME,MODEL,SERIAL,SIZE,TRAN,VENDOR,HCTL,TYPE,FSTYPE,LABEL,UUID
```

If you precede and follow your pasted output with ``` on lines of their own then the output will be much easier to read (as I have done with this post for the commands).

These are the commands that Rockstor uses internally to assess drives and their partitions / filesystems.

All I can think of initially is that your drives are using some kind of LVM arrangement and that’s not something Rockstor knows anything about.

You could then see if, after a:

```
btrfs dev scan
```

the outputs change at all. Thanks, and let's see what's going on.

As @paulsmyth has already covered, it is not advisable to make any OS or hardware changes using devices that hold your only copy of the data. OS installs are particularly risky in this sense.

For the time being, stay away from all Disk, Pool, Share, and export parts of the UI, and don't attempt to create anything new: from Rockstor's point of view those drives are blank, so it sees no risk in any further actions.

Hope we can help here and thanks for reporting your findings. Once we have the output of those commands we can assess more what’s going on.

Thanks.

1 Like

You are right. I don’t have a backup of this data. This is not critical data, although it would be a considerable nuisance to replace it if I had to. My critical data is in a Synology NAS and all data is mirrored. I don’t touch that on pain of death, and I change those hard disks every couple of years by default. In fact, the drives in the OMV/Rockstor box are ones which had been in the Synology for a couple of years and now store my media files.

I'll look into getting an external drive for this.

Thanks for the reply. As requested, here are the outputs of the commands …

```
[root@rockstor ~]# btrfs fi show
Label: 'rockstor_rockstor'  uuid: 1aa58e65-e073-47d5-97c6-f310c23c4f99
        Total devices 1 FS bytes used 1.56GiB
        devid    1 size 927.15GiB used 4.04GiB path /dev/sda3

[root@rockstor ~]# lsblk -P -o NAME,MODEL,SERIAL,SIZE,TRAN,VENDOR,HCTL,TYPE,FSTYPE,LABEL,UUID
NAME="sdb" MODEL="ST3000VN000-1H41" SERIAL="Z300PS44" SIZE="2.7T" TRAN="sata" VENDOR="ATA     " HCTL="1:0:0:0" TYPE="disk" FSTYPE="LVM2_member" LABEL="" UUID="HImDHq-5BUy-Kgbv-x6FU-ery9-lyXY-ofu4ud"
NAME="sdc" MODEL="WDC WD30EFRX-68E" SERIAL="WD-WCC4N0827668" SIZE="2.7T" TRAN="sata" VENDOR="ATA     " HCTL="2:0:0:0" TYPE="disk" FSTYPE="LVM2_member" LABEL="" UUID="2s2YrE-64Na-dI0M-mTz7-AK3O-mrNt-e5rdD0"
NAME="sda" MODEL="WDC WD10EZEX-08W" SERIAL="WD-WCC6Y4CKJ2HK" SIZE="931.5G" TRAN="sata" VENDOR="ATA     " HCTL="0:0:0:0" TYPE="disk" FSTYPE="" LABEL="" UUID=""
NAME="sda2" MODEL="" SERIAL="" SIZE="3.9G" TRAN="" VENDOR="" HCTL="" TYPE="part" FSTYPE="swap" LABEL="" UUID="d4d62efb-562c-4ebc-8e5f-8bef31fcf84c"
NAME="sda3" MODEL="" SERIAL="" SIZE="927.2G" TRAN="" VENDOR="" HCTL="" TYPE="part" FSTYPE="btrfs" LABEL="rockstor_rockstor" UUID="1aa58e65-e073-47d5-97c6-f310c23c4f99"
NAME="sda1" MODEL="" SERIAL="" SIZE="500M" TRAN="" VENDOR="" HCTL="" TYPE="part" FSTYPE="ext4" LABEL="" UUID="3c98d43a-5451-459a-ac59-cf03f26bba2a"
```

Looking at the output, I see that there is a logical volume involved (LVM2), so I think I can conclude that Rockstor will not be able to handle this filesystem as it stands. This is a pity, as I am a little frustrated with OMV and I had thought that Rockstor might be the answer. So far, I like Rockstor. Is there any way of converting the LVM2 to BTRFS on the disk? I suspect not, but it's worth asking.

Thanks for the help.

PhillVS

No, I doubt very much this is possible. LVM is a volume manager: the physical volumes are carved up into logical volumes, which here hold the filesystems. You can install the LVM tools and mount the filesystems in a terminal, which at least means you have a way to access the data. My advice is to do that, get hold of an external drive and dump all the data off, then wipe the disks and import them into Rockstor. This is the cleanest and safest way to do it.
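A minimal sketch of that recovery route, assuming the LVM tools are not yet installed and that OMV created a volume group whose name we don't yet know — the `vgscan`/`lvs` output will report the real names; `omv_vg` and `media_lv` below are placeholders, not your actual names:

```shell
# Install the LVM userspace tools (package is called 'lvm2' on most distros;
# substitute your Rockstor base's package manager as needed).
yum install -y lvm2

# Detect LVM physical volumes, volume groups, and logical volumes.
pvscan
vgscan
lvs            # note the real VG/LV names reported here

# Activate all detected volume groups so their LVs appear under /dev.
vgchange -ay

# Mount the logical volume read-only; replace the placeholder names with
# the real ones from 'lvs' above.
mkdir -p /mnt/omv
mount -o ro /dev/omv_vg/media_lv /mnt/omv

# Copy the data off to an external drive, e.g. with rsync.
rsync -a /mnt/omv/ /path/to/external-drive/
```

Mounting read-only costs nothing and guards against accidental writes while you copy the data off.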

@PhillVS Thanks for the command outputs, and yes, as you point out, your drives appear to be whole-disk LVM2 Physical Volumes. This is not something Rockstor has the ability to interpret, but we definitely have a bug with regard to not at least recognising them as incompatible. I have created an issue to address this and linked back to this thread for context:

I see that @paulsmyth has just replied similarly (as I was in the process of preparing the following): I’ll post anyway in case it helps.

As for conversion, this is an apples and oranges type of thing, as LVM is only a Logical Volume Manager, whereas btrfs is both a filesystem and a volume manager.

There is most likely a way to transition the data ‘in situ’, however it would require a few steps and would depend upon the original btrfs raid level employed, and on whether it (and the data size) in turn allowed the data to reside on a single disk only. That way you could take the freshly relinquished device and re-purpose it as a whole-disk btrfs volume. You would then copy the data over from the now single-disk btrfs-in-logical-volume-in-physical-volume (LVM) to the fresh single-disk native btrfs (no LVM underneath) and then, once the data was confirmed as moved, wipe the btrfs-in-lv (LVM), wipe that disk, and in turn add it as a member to the whole-disk btrfs pool you created earlier. Its raid level could then be changed if appropriate.

So quite a few steps, and very much dependent on the prior raid level, the amount of data, and command-line competency. I would not advise this approach, but if it is your only option for now then you can either expand your options or look into how this might be done (if possible given the data size, that is). The specifics are best left to your own research, as it is imperative that you understand each step as you go.
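For illustration only, the steps above could be sketched roughly as follows, assuming all the data fits on one disk, that the VG spans both drives, and that the real VG/LV names have been checked with `pvs`/`lvs` first (`omv_vg`, `media_lv`, and the `/dev/sdX` assignments here are placeholders). Every command below is destructive; do not attempt this without a verified backup:

```shell
# 1. Migrate all allocated LVM extents off one physical volume (/dev/sdb),
#    then shrink the VG so that disk is no longer a member.
pvmove /dev/sdb
vgreduce omv_vg /dev/sdb
pvremove /dev/sdb

# 2. Re-purpose the freed disk as a whole-disk native btrfs volume.
mkfs.btrfs -L media /dev/sdb
mkdir -p /mnt/new && mount /dev/sdb /mnt/new

# 3. Copy the data across from the old btrfs-on-LVM filesystem.
mkdir -p /mnt/old && mount -o ro /dev/omv_vg/media_lv /mnt/old
rsync -a /mnt/old/ /mnt/new/

# 4. Once the copy is verified, dismantle the LVM side and add that
#    disk to the new pool.
umount /mnt/old
vgchange -an omv_vg
wipefs -a /dev/sdc
btrfs device add /dev/sdc /mnt/new

# 5. Optionally convert data and metadata to raid1 across the two disks.
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/new
```

Step 1 only succeeds if the LV's extents actually fit on the remaining PV, which is exactly the data-size dependency described above.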

Hope that helps, and thanks again for reporting this issue — nice find. And good luck getting this sorted: careful treading is in order, I would say.

1 Like

I’ve accidentally nuked a drive by doing exactly that on more than one occasion; last time I was re-installing Proxmox on a remote machine via IPMI and selected one of the data HDDs instead of the OS SSD.

Thankfully, once I got it installed and the VMs restored, I was able to boot Rockstor and have it rebuild the raid 5 (which, given the raid 5 bugs in BTRFS, means I was pretty lucky).

1 Like