Disk unmounted, data still there

Hello All,
I set up a Rockstor server yesterday and ran into an issue. My disks are there, but the shares are not mounted. I was in the process of doing a backup when I saw that the pool was a single array of three drives, so I converted it to raid 5 and waited maybe 2 or 2.5 hours for the 500GB of data to be converted, then rebooted, and now the shares are not mounted.
Anyway, I love the software and am hopeful this is recoverable.
Thanks Paul

@pcgrubb127 Welcome to the Rockstor community forum.

Yes, single is not a good choice (except in very specific instances); however, the parity raid levels of btrfs raid 5 and 6 are only a little better, given their non-production status in btrfs currently.

See our Redundancy Profiles doc section for a fairly current overview:
https://rockstor.com/docs/interface/storage/pools-btrfs.html#redundancyprofiles

What you may be seeing here is our upstream's (openSUSE's) decision to make the parity raids (5 & 6) read-only by default. We do not second guess this decision given they are the experts in this matter. I'm then surmising that you have an in-flight conversion that ended up being unmountable read-write (rw). It may be that you can add a custom mount option of ro to retrieve your existing data. It all depends on whether you were still populating the pool at the time, or whether your problem pool is the source of truth.
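
If it comes to that, here is a minimal sketch of a read-only rescue mount from the command line; the device and mount point below are placeholders, and within Rockstor the equivalent is adding "ro" as a custom mount option for the Pool via the Web-UI:

    # Placeholder device and mount point - substitute your own pool member
    mkdir -p /mnt/rescue
    mount -o ro /dev/sdb /mnt/rescue
    # Confirm the pool members are seen and the data is readable
    btrfs filesystem show /mnt/rescue
    ls /mnt/rescue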

However, the work-around we suggest when folks insist on parity raid use is indicated in the above doc reference, in the note below the profiles explanation: essentially, installing a newer upstream kernel (also prepared by openSUSE). The hope is that a newer kernel increases the chance of a stable experience with the parity btrfs raid levels, and parity raid is not disabled there by default. Not ideal, but it is the situation we find ourselves in re btrfs and its parity raid levels. If you take this route, via the HowTo linked here for context:

https://rockstor.com/docs/howtos/stable_kernel_backport.html#stable-kernel-backport

be sure you understand what that howto states - and follow through on all the elements there - as appropriate to your openSUSE version. And note that the howto also updates the user-land btrfs tools - again supplied as a back-port by the openSUSE folks.
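
For orientation only, the overall shape of that procedure is roughly as below; the repository URLs and exact package set must be taken from the howto for your Leap version, so treat this as a placeholder sketch rather than the actual steps:

    # Illustrative only - use the exact repo URLs/steps from the Stable Kernel Backport howto
    zypper addrepo <kernel-stable-backport-repo-url> kernel-backport
    zypper addrepo <filesystems-backport-repo-url> filesystems-backport
    zypper refresh
    # Newer kernel plus matching user-land btrfs tools, then boot into the new kernel
    zypper install --from kernel-backport kernel-default
    zypper install --from filesystems-backport btrfsprogs
    reboot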

Alternatively, you could simply not use the parity raid levels of 5 & 6.

If you are using a more recent Rockstor version (likely, if you have recently downloaded the installer), you could also entertain a mixed raid setup where data is stored as btrfs raid 5 or 6 but the metadata is kept as, say, btrfs raid1c3/4. But again, this still requires the stable kernel backport.
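
Under the hood, such a mixed setup roughly corresponds to giving btrfs separate conversion targets for data and metadata; a minimal sketch with a placeholder pool name (the Web-UI performs the equivalent for you, as noted below):

    # Rockstor pools are mounted under /mnt2/<pool-name>; the name here is a placeholder
    # Convert data to raid5 and metadata to raid1c3 in one balance
    btrfs balance start -dconvert=raid5 -mconvert=raid1c3 /mnt2/pool-name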

We introduced mixed raid capability within the Web-UI from version 4.5.9-1 onwards.

So this is available now in both our stable channel (4.6.1-0 latest) and our latest testing channel (5.0.3-0).

However, the latest testing release is still a little on the young side - with many parts still in flux (read: being updated).

What version are you currently running, incidentally? And what is the openSUSE base version? The latter should be Leap 15.4, as that is our current target until we make a few more developments in testing. Thereafter we will be moving to 15.5 as the preferred base.
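
If it helps in answering that, both can be read off at the command line; a quick sketch, assuming the package is the "rockstor" rpm as on our openSUSE-based installers:

    # openSUSE base version (e.g. Leap 15.4)
    cat /etc/os-release
    # Installed Rockstor package version (assumes the rpm is named "rockstor")
    rpm -q rockstor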

Hope that helps.


Hello, thanks for the response.
Right now all the data on the array is backed up, so I have the ability to start over at any time. However, I would like to know what I did wrong; I only saw RAID levels as options for the parity types. I am on Leap 15.4. Where would I find how to set things up properly? Wiping and starting fresh is fine, but I want to learn.
Thanks Paul

@pcgrubb127 I'm not entirely sure about your question: are you looking for which one to pick from that list?


With three disks (if I read that right above), you can pick any of these:

raid1
raid1c3
raid5
raid6
raid5-1
raid5-1c3
raid6-1c3

If you trust the stability of raid5 or raid6, I would at least go with the 1c3 options of either of them (i.e. the metadata is raid1c3, meaning 3 copies of the metadata). Otherwise, I'd stick with raid1c3 (both data and metadata use the raid1c3 setup), or if you add another disk (i.e. at least 4 data disks) you could use raid10-1c3.
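
Whichever you pick, you can confirm afterwards which profiles the data and metadata actually landed on; a small sketch, with the mount point again a placeholder:

    # Shows the block group profiles in use for data, metadata and system
    btrfs filesystem df /mnt2/pool-name
    # Per-device breakdown with the same profile information
    btrfs filesystem usage /mnt2/pool-name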

The changing of RAID levels can definitely take a very long time.
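
If you want to see how far along such a conversion is, the underlying balance reports its progress; again, a placeholder mount point:

    # Progress of an in-flight profile conversion (balance)
    btrfs balance status /mnt2/pool-name
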
On another note, I've been using the backported kernel approach for a pretty long time (as @phillxnet mentioned above). It means more frequent updates that require a reboot (the kernel versions are updated all the time, the btrfs tools likely less so), but I have been happy with it (of course, I have not had any catastrophic failures either).

Hope that helps.


Hello All,
Thanks for the great information. I am switching to raid5-1c3, and I hope that I don't have to recopy all my data, but I still have it on the old machine, so no big deal if I have to recopy.
Thanks again


Hello again,
I decided to wipe it and try again with the Tumbleweed version.
Maybe that was my mistake, but I cannot access any of the shares, just like last time.
I get a "Windows is not able to access this location" error and nothing shows up. I set it up the same way as before. Anyway, looking for more support.
Thank you for your assistance in advance.

Hello,
I thought I would leave this behind in case someone else has this issue.
The IP address was misconfigured and was a /32, not a /24. I tried changing it, but it would always reset, so I removed the bond between the Ethernet interfaces and now everything works.
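
For anyone else debugging this: a /32 prefix means the address reaches only the host itself, whereas a /24 covers the usual home LAN subnet. A quick way to spot it is below; the connection name and address are purely illustrative (the openSUSE base typically manages networking via NetworkManager):

    # Show each interface's address and prefix length at a glance
    ip -br addr show
    # Illustrative fix: set a /24 on a hypothetical connection named "eth0"
    nmcli connection modify eth0 ipv4.addresses 192.168.1.10/24
    nmcli connection up eth0
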
Thanks guys
