I checked my torrents this morning and there seemed to be a permissions error with the share, so I tried to change it to 777 in the UI. This threw an error saying the share was read-only, but it still showed all my torrents; they just didn't download and were frozen. So I rebooted from the UI, and when Rockstor restarted it had no shares.
One of my disks has corrupted sectors and I have a replacement ordered. Additionally, another disk, /dev/sda, triggered an email alert saying it had 6 pending sectors, and it shows SMART errors in its history:
Device: /dev/sda [SAT], 6 Currently unreadable (pending) sectors
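For reference, the pending sector count and self-test history can be confirmed with smartctl (a minimal sketch, assuming smartmontools is installed and /dev/sda is still the affected drive):

smartctl -a /dev/sda           # full attribute dump, including Current_Pending_Sector
smartctl -l selftest /dev/sda  # self-test log, showing the errors in its history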
For context, recent changes include installing the pi-hole and plexrequests docker containers, as well as mounting the samba shares on my Ubuntu desktop (I know samba isn't typical for Linux-to-Linux, but I have mostly Windows machines and am lazy).
Any advice on fixing this? Surely the data on the drives isn't suddenly lost. I have backups of most things, but my TV and movies are too large to back up. Are any additional logs or data needed?
I have SSDs I could swap in for the cheap HDD that is my Rockstor system drive, but before I attempt that I want to know whether a fresh install of Rockstor would give me the best chance of recovering my data.
[root@warehouse13 ~]# btrfs fi show
Label: 'rockstor_rockstor'  uuid: bf814599-448b-4bb6-8b7d-52f89ecbebb3
	Total devices 1 FS bytes used 8.93GiB
	devid    1 size 53.17GiB used 14.02GiB path /dev/sdb3

warning, device 2 is missing
warning, device 3 is missing
bytenr mismatch, want=3950535573504, have=0
Couldn't read chunk tree
Label: 'media'  uuid: 5b3a7f28-0ad4-4c44-830a-a0037b6fb9b7
	Total devices 5 FS bytes used 4.54TiB
	devid    4 size 5.46TiB used 2.33TiB path /dev/sdd
	devid    5 size 1.82TiB used 1.55TiB path /dev/sdc
	devid    6 size 3.64TiB used 2.33TiB path /dev/sda
	*** Some devices missing
@coleberhorst Hello again. From the looks of it, your filesystem went read-only due to errors; this is typical behaviour for btrfs when faced with errors (either via bugs or corruption). Your files still showed at that point because the fs was still mounted, albeit read-only.
From your btrfs fi show output it would seem that you have 2 missing devices in your 'media' pool: it reports 5 devices in total, yet only 3 are listed (devids 4, 5 and 6). This is bad, as only btrfs raid6 can withstand a 2-drive failure, and raid6 has its own issues.
Your only option with 2 devices missing (assuming this is not down to cable issues or the like) is to attempt a restore to another mounted filesystem: https://btrfs.wiki.kernel.org/index.php/Restore
and get whatever you can off. Or, if a cable has slipped out of a remaining (good) drive, you might be lucky and be able to (after the restore attempt) mount degraded and sort things out from there, i.e. copy data off or delete a missing device (data size and raid level allowing). Note that a degraded mount, in addition to the rw option, will be required for changes/repair, and this is a one-shot deal, so plan carefully and get everything you can off first.
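A rough sketch of that order of operations, with placeholder device and mount paths (adjust to your own setup):

mkdir -p /mnt/recovery                    # destination on a separate, healthy filesystem
btrfs restore /dev/sdd /mnt/recovery      # non-destructive read-only restore attempt first
mount -o degraded,rw /dev/sdd /mnt/media  # one-shot degraded rw mount, only after the restore
btrfs device delete missing /mnt/media    # drop a missing device, data size and raid level allowing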
Ok, I swapped some connections and cards around and got all the drives to show up. Also, my new 6TB drive arrived in the mail, so I have a blank drive to add. However, the old shares are still not mounted, and I didn't want to add the new 6TB drive without confirmation.
[root@warehouse13 ~]# btrfs fi show
Label: 'rockstor_rockstor'  uuid: bf814599-448b-4bb6-8b7d-52f89ecbebb3
	Total devices 1 FS bytes used 8.93GiB
	devid    1 size 53.17GiB used 14.02GiB path /dev/sdd3

Label: 'media'  uuid: 5b3a7f28-0ad4-4c44-830a-a0037b6fb9b7
	Total devices 5 FS bytes used 4.54TiB
	devid    2 size 1.82TiB used 1.55TiB path /dev/sda
	devid    3 size 1.82TiB used 1.55TiB path /dev/sdb
	devid    4 size 5.46TiB used 2.33TiB path /dev/sdf
	devid    5 size 1.82TiB used 1.55TiB path /dev/sde
	devid    6 size 3.64TiB used 2.33TiB path /dev/sdc
The self-tests show errors, but the drives still pass SMART overall and show up in btrfs fi show.
The big questions are: how do I attempt to remount these old shares, and in what order should I try these steps?
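In case it clarifies the question, this is roughly what I was planning to try, in order, going by the earlier advice (mount point and by-label path assumed):

btrfs device scan                                          # re-register pool members after the cable reshuffle
mount -o ro /dev/disk/by-label/media /mnt/media            # plain read-only mount first, now all devices show
mount -o ro,recovery /dev/disk/by-label/media /mnt/media   # fallback to backup roots ('usebackuproot' on kernels 4.5+)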
Gave up and reloaded everything except my TV and movies from backup, haha. I was unable to restore using the btrfs recovery mode. It appears I had two disk failures, but I will do some more testing on the disks I removed. I added another 6TB drive, for 6, 6, 4 and 2TB drives in total now. I'm probably going to add another 6TB; would you recommend raid10 again?
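For reference, the command-line equivalent of adding the new drive and converting the pool to raid10 would be roughly as follows (device name is a placeholder, and /mnt2/media assumes Rockstor's usual mount point; the Rockstor UI does the equivalent):

btrfs device add /dev/sdg /mnt2/media                               # add the new 6TB drive to the pool
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt2/media   # rebalance data and metadata to raid10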