@Hooverdan no worries, I appreciate the reply. It turned out to be Windows clients with a file manager window open. Once I closed those, the umount went through with no problem. I got the pool remounted degraded and the errors cleared in the Rockstor UI, which I took as a good sign. I wasn't able to remove the missing device 3, so instead I focused on removing the other drive that had a SMART warning of impending failure. I have one more disk to remove, and once that finishes I'll turn back to device 3. I guess I'll mount in normal mode and see if I can remove it then. If I run a btrfs fi show command it doesn't even come up in the list, so when I try to remove it by device number or with "missing" I get an error that it doesn't exist. I'm beginning to think it
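For anyone finding this thread later, the rough sequence I was attempting looked like this (mount point and device paths are from my setup, and /dev/sdX is a placeholder, so adjust accordingly):

```shell
# Mount the pool degraded so the missing member doesn't block access
mount -o degraded /dev/sde /mnt2/Jones-Pool

# Drop the absent member; btrfs accepts the keyword "missing"
btrfs device remove missing /mnt2/Jones-Pool

# Or remove by devid (the number shown by 'btrfs fi show'), e.g. devid 3
btrfs device remove 3 /mnt2/Jones-Pool

# For a failing drive that is still attached, remove it by path instead.
# Note: remove blocks while data migrates off the disk, so it can
# appear to hang for hours on a multi-TB pool.
btrfs device remove /dev/sdX /mnt2/Jones-Pool
```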
@Hooverdan @phillxnet I wanted to add one more message to wrap up this thread. My pool appears happy again, with no loss of data. Thinking back over everything I did, I believe I actually removed device 3 without realizing it. After getting errors trying to remove it by device name, I tried using the device number and it appeared to hang, so I closed my ssh session and got on with my day. It was when I came back that I got the "no device 3 found" errors on further remove attempts. So my assumption is that I was impatient, and the remove finished after I walked away. After that I used the UI to remove the other 4TB drive that had SMART errors, as well as the 12TB drive I had added during troubleshooting, which is now gross overkill. Now when I run btrfs fi show it looks like this:
[root@jonesville ~]# btrfs fi show /mnt2/Jones-Pool
Label: 'Jones-Pool'  uuid: 101993b1-827b-4611-b353-ba4b53691911
        Total devices 5 FS bytes used 2.66TiB
        devid    1 size 3.64TiB used 1.07TiB path /dev/sde
        devid    2 size 3.64TiB used 1.07TiB path /dev/sdf
        devid    4 size 3.64TiB used 1.06TiB path /dev/sdg
        devid    6 size 3.64TiB used 1.06TiB path /dev/sdd
        devid    7 size 3.64TiB used 1.06TiB path /dev/sdc
I'm guessing that when I reboot it will renumber the drives and I'll have 1 through 5. After a reboot, assuming no surprises, it's time to make the LEAP to SUSE-based Rockstor. Now my only dilemma is whether to load 4.6.1 or wait for RC10 to hit the street. Thank you both for all your help. The CentOS version may be old, but clearly it still does the job. I can't wait to enter the modern age of Rockstor; you really produce a quality product. Looking forward to contributing.