Sorry for the provocative title. I am running into so many issues with btrfs that I feel like posting under a different username so as not to appear incompetent…
So, here is my story:
I had 4 disks (2, 1.5, 1.5, 0.5 TB) that I wanted to use for a raid1 or maybe raid5 setup as a Braswell-based home NAS. Since two of these disks (1.5 TB each) were used in an existing raid1 with all my important data, I wanted to do a step-by-step build of the btrfs array. I have some experience with btrfs, but only with single devices and non-crucial data, and so far without any problems.
I started out using the 2 and 0.5 TB disks in single mode, copied all my data over and then proceeded to integrate the first 1.5 TB disk into the array; the second one I kept as a fallback in case anything went wrong.
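For reference, the btrfs-level steps behind this look roughly as follows (Rockstor drives all of this through its UI; the device names and mount point below are just placeholders):

```
# 2 TB + 0.5 TB disk, data in 'single' profile, metadata mirrored
mkfs.btrfs -d single -m raid1 /dev/sda /dev/sdb
mount /dev/sda /mnt/pool
# ...copy data over...
# add the first 1.5 TB disk and convert everything to raid1
btrfs device add /dev/sdc /mnt/pool
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool
```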
1. issue – I ran into the extent_tree problem, which caused my server to crash a few times (I assume that was the reason). So I think my balance may never have completed after converting this now 3-disk array to raid1. Technically, that shouldn't be a serious problem for my data.
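If anyone wants to check for the same situation: as far as I understand, an interrupted conversion can be spotted and resumed along these lines (mount point is a placeholder, and I'm not certain this covers every case):

```
btrfs balance status /mnt/pool   # is a balance still running or paused?
btrfs balance resume /mnt/pool   # try to resume an interrupted balance
btrfs filesystem df /mnt/pool    # leftover 'single' chunks mean the raid1 conversion never finished
```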
2. issue – my freshly added 1.5 TB disk appears to have died. Rockstor didn't report anything, but silently failed to mount the array. Maybe it was because of a few hard resets I/Linux had to do, maybe it was a random failure, but my array just wouldn't mount. After a while I noticed one disk emitting clicking noises under certain conditions, and fdisk -l also sometimes didn't recognize that device (I restarted a couple of times to narrow down the issue). Removing the disk and putting it in an external case only told me that the disk wasn't responding reliably, so a SMART test wasn't really an option.
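For anyone in a similar spot, this is roughly how such a failure can be narrowed down (device name is a placeholder; smartctl comes from smartmontools):

```
btrfs filesystem show            # which devices the filesystem expects vs. which are actually present
dmesg | grep -iE 'ata|sd|btrfs'  # kernel I/O errors from the flaky disk
smartctl -a /dev/sdc             # SMART data, if the drive still answers at all
```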
In order to make use of btrfs' famed recovery abilities and to keep this as hassle-free as possible, I decided to simply back up my important data and add my fourth, working drive to the array. Only I couldn't, because now I ran into the
3. issue – tl;dr: Apparently, under certain circumstances when a disk goes missing you cannot mount a btrfs filesystem in rw mode, so you cannot even add a new disk, even though there's no data missing. (There shouldn't be anything missing in my case, and I can see and copy my files in ro mode.) So, I didn't lose any data, which is nice. But I can't do anything to progress from here because of the ro mode. Reading up a little, this might also be caused by my earlier conversion and the 'paused' balance.
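Concretely, the recovery path that's blocked looks something like this (placeholder names again; from what I've read, some kernels refuse or limit degraded rw mounts once 'single' chunks are involved, so I can't vouch for every detail):

```
# mount degraded; if this only succeeds read-only, the steps below are refused
mount -o degraded /dev/sda /mnt/pool
# what I would like to do next:
btrfs device add /dev/sdd /mnt/pool          # add the fourth, working drive
btrfs device delete missing /mnt/pool        # drop the dead 1.5 TB disk from the array
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool   # re-balance if needed
```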
4. issue – I am unable to remove the faulty filesystem from Rockstor because of snapshots on some shares: read-only -> no snapshot deletion -> no share deletion -> no pool deletion. I could live with simply nuking the remaining filesystem and rebuilding the array from scratch, but all the names, especially my share names, would then still be "taken" in the Rockstor database, and I'd have to reconfigure the clients I have already set up. In that case I'd sooner reinstall Rockstor.
So, many of my problems are simply btrfs or hardware related, and I guess we all know what we're in for when using an "unstable" filesystem. I still think it's a good solution and I also like Rockstor, but I am looking for a way out of this. I'd be content with the possibility to remove the shares from within Rockstor, or maybe an option to delete an entire pool. I'd like to avoid reinstalling Rockstor from scratch…
Thanks for reading