Cannot import pool from RAID 1

Hi,


I attached two data hard drives and created a btrfs pool (Backup) in RAID 1 configuration under Rockstor 3.5-9. 

After populating the data drives I unfortunately had to re-install Rockstor. I upgraded to 3.5-9 again and re-attached the two disks from the pool mentioned above (Backup). 

In the Storage/Disks page I can see the two disks, sda and sdb, both of which have been identified as having a btrfs filesystem on them - “Disk is unusable because it has BTRFS filesystem on it, click to wipe.”

If I click the import data button on either drive, I get this:-


Error!


Failed to import any pools, shares and snapshots that may be on the disk: sda. Import or backup manually

rockstor.log contains:-
[29/Jan/2015 09:56:48] ERROR [storageadmin.util:38] request path: /api/disks/sda/btrfs-disk-import method: POST data: <QueryDict: {}>
[29/Jan/2015 09:56:48] ERROR [storageadmin.util:39] exception: Failed to import any pools, shares and snapshots that may be on the disk: sda. Import or backup manually
Traceback (most recent call last):
  File "/opt/rockstor/eggs/gunicorn-0.16.1-py2.7.egg/gunicorn/workers/sync.py", line 34, in run
    client, addr = self.socket.accept()
  File "/usr/lib64/python2.7/socket.py", line 202, in accept
    sock, addr = self._sock.accept()
error: [Errno 11] Resource temporarily unavailable

Note: I can manually mount the btrfs filesystem - “mount /dev/disk/by-label/Backup /mnt/Backup” so the filesystem is ok. 

I assume that there is a bug here. Is there any way in the short term that I can manually create a pool from the filesystem? 

Thanks

I can tell you some simple code modifications and steps to recreate the pool without losing data. Do you have any Shares and data on them? If you don't have any data, just wipe the disks and recreate the pool instead of importing – you probably know this already, but just saying.

If you do have shares (aka subvolumes), they can also be recreated with a bit of tweaking. Let me know. I sure hope you don't have snapshots to restore :slight_smile:

The steps to recreate the pool would be great, thank you. 


The drives contained a lot of data in 1 pool, 2 shares and 0 snapshots. Phew!

Thanks for the help!

Try this and let me know. I tried to be very explicit and thorough, hope it works. Soon, we’ll make pool/share import automatic.

1. Get your pool name. "btrfs fi show" lists all pools; the pool name is the Label (see the example output just below, after step 5). Yours should be Backup.
2. The output also lists the devices in that pool; you probably know the disk names already, but just confirm.
3. The raid level is presumably raid1.
4. Let's also make sure that you haven't mounted the pool manually somewhere else. From your earlier comment it looks like you mounted it under /mnt and not /mnt2. Rockstor mounts pools under /mnt2, so umount any manual mounts. If you did mount Backup at /mnt2/Backup, you can leave it alone.
5. You don't have to worry about compression or extra mount options you may have had from before – these can be reset later.
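
For reference, the output of btrfs fi show should look something like this – the uuid, sizes and usage figures here are made up; the Label and the two device paths are what matter:

Label: 'Backup'  uuid: <your-pool-uuid>
        Total devices 2 FS bytes used 1.10TiB
        devid    1 size 1.82TiB used 1.12TiB path /dev/sda
        devid    2 size 1.82TiB used 1.12TiB path /dev/sdb
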
6. Turn off rockstor – systemctl stop rockstor
7. Open (to edit) /opt/rockstor/src/rockstor/storageadmin/views/pool.py (first make a copy of it just in case).
8. Line #191 should be "add_pool(p, dnames)". Just comment it out by adding a # in front of the line, so it looks like "#add_pool(p, dnames)". Make sure you don't break the indentation – it's Python. Maybe you already know this, I'm just being extra explicit.
9. Open (to edit) /opt/rockstor/src/rockstor/fs/btrfs.py (first make a copy of it just in case).
10. Line #615 defines a function that looks like:

def wipe_disk(disk):
    disk = ('/dev/%s' % disk)
    return run_command([WIPEFS, '-a', disk])

11. Change it to look like this (basically make it do nothing):

def wipe_disk(disk):
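    # intentionally a no-op: report success without touching the disk, so the existing data is preserved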
    return True

12. Open (to edit) /opt/rockstor/src/rockstor/storageadmin/views/disk.py (first make a copy of it just in case).
13. Comment out lines #48 and #49. These lines should look like the following:

def get_queryset(self, *args, **kwargs):
        #do rescan on get.
        #with self._handle_exception(self.request): <-- line 48
        #    self._scan() <-- line 49
        if ('dname' in kwargs):

14. Turn rockstor back on (systemctl start rockstor).

15. Manually mount the pool with the command: mount /dev/disk/by-label/Backup /mnt2/Backup. Note that the mount point is /mnt2 and not /mnt like you mentioned in your earlier comment. If it's already correctly mounted, move on.
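
Spelled out as commands – the mkdir is only needed if the mount point doesn't exist yet, and the last line is just a check that the pool ended up where Rockstor expects it:

mkdir -p /mnt2/Backup
mount /dev/disk/by-label/Backup /mnt2/Backup
mount | grep Backup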

16. Go to the Storage tab of the web-ui. You should see the two drives. Click on the wipe/eraser icon. This will just update the state of those disks without really wiping them, because of the changes you made in steps 9-13.

17. Now create the pool. Make sure you choose the same name, disks and raid level (raid1) as before. Just double-check everything before hitting submit. It should happily create the pool – with add_pool commented out (step 8), Rockstor only records the pool without re-creating the filesystem on the disks.

18. Now, on to share restoration. Before we do anything, turn off rockstor: systemctl stop rockstor

19. Execute the following command and you'll see your shares: btrfs subvol list /mnt2/Backup. Gather all the names.
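
The output should look roughly like this – the IDs, gen values and share names below are just placeholders for whatever your two shares are actually called:

ID 257 gen 1042 top level 5 path share1
ID 258 gen 1097 top level 5 path share2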

20. Restore all the changed code files back to how they were before (/opt/rockstor/src/rockstor/storageadmin/views/pool.py, /opt/rockstor/src/rockstor/storageadmin/views/disk.py, /opt/rockstor/src/rockstor/fs/btrfs.py).
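
For example, if the backup copies from steps 7, 9 and 12 were made with a .orig suffix (that suffix is just an assumption – use whatever names you actually copied to), restoring is just:

cd /opt/rockstor/src/rockstor
cp storageadmin/views/pool.py.orig storageadmin/views/pool.py
cp storageadmin/views/disk.py.orig storageadmin/views/disk.py
cp fs/btrfs.py.orig fs/btrfs.py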

21. Turn rockstor back on: systemctl start rockstor

22. Go to the Storage tab of the web-ui and you should see the disks with the right pool name populated in the table.

23. Now, create the shares one by one, using the names you gathered in step 19. Just pick the right pool (Backup); the other fields don't matter much. It should just work.

Ok, so since I care about the community so much :), I tested this out to make sure these steps work. However, please don’t hate me if they don’t work. Even if it bombs, your data should still be intact and we can further troubleshoot.

Thanks for taking the time to put all of that together and test it. Much appreciated! :slight_smile:


Everything has worked fine up until the creation of the shares. When I tried to recreate the share using the same name it previously had, the GUI would only let me select a maximum size equal to the remaining free space on the disk, rather than the space the share already uses.

A share has therefore been set up with the old name, and it appears to export correctly and is usable; however, the GUI reports its size incorrectly. I can live with that for the short term – not sure if there will be any side effects due to that, though (apart from not being able to monitor the size clearly :slight_smile: )
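
In the meantime the real on-disk usage can still be checked from the command line, independently of what the web-ui reports (assuming the pool is still mounted at /mnt2/Backup):

btrfs fi df /mnt2/Backup
du -sh /mnt2/Backup/*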

I did try deleting it and recreating it, but to no avail.

Again, many thanks!

Going to try and rebuild it. Will be the cleanest way. Thanks for all the help!