Lost a single share

Hi there, I am new to Rockstor and took over a server (24x 16TB drives) from my former colleague. I updated it to 3.9.2.53 (from .50) and had to reboot a few times because it performed very poorly (slow and frozen GUI). Now it says one share is not mounted and I can't access its files via SMB. The pool itself is mounted at /mnt2/Data and I can access the share folder via ssh (WinSCP or PuTTY). The data of the unmounted share seems to be ok.
As far as I can see, the share has another mount point in /mnt2/ and of course it is not mounted there.
Trying to add a new share to the pool Data brings this error:
Traceback (most recent call last):
File "/opt/rockstor/src/rockstor/rest_framework_custom/generic_view.py", line 41, in _handle_exception
yield
File "/opt/rockstor/src/rockstor/storageadmin/views/share.py", line 181, in post
add_share(pool, sname, pqid)
File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 584, in add_share
toggle_path_rw(root_pool_mnt, rw=True)
File "/opt/rockstor/src/rockstor/system/osi.py", line 545, in toggle_path_rw
return run_command([CHATTR, attr, path])
File "/opt/rockstor/src/rockstor/system/osi.py", line 120, in run_command
raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /usr/bin/chattr -i /mnt2/Data. rc = 1. stdout = ['']. stderr = ['/usr/bin/chattr: Read-only file system while setting flags on /mnt2/Data', '']

What can I do to bring the share online again?

@HansFranz Welcome to the Rockstor community:

For larger file systems, having quotas enabled can be unworkable. First, be sure of the version you are actually running via:

yum info rockstor

This matters because the testing channel had a bug which caused the available version to be shown as the installed one.

The reason to make certain is that only the Stable Channel can have quotas disabled and still function.

Once you have ensured you are actually running the latest Stable release, you can disable quotas via the Web-UI; the above forum link has a picture of where to do this.

Presumably this means you can see the suspect share's data when accessing it from its pool mount point, i.e. /mnt2/Data/share-name
But the dedicated mount for that share:
/mnt2/share-name
is empty.

In which case you may have a poorly pool that has gone read-only, which is what the "Read-only file system" error in your traceback could be suggesting.

Or were you trying to re-create a share with the same name? That is not wise, as the share would seem to already be present from your initial report of finding its data. Btrfs pools present their subvolumes' data within subdirectories, but these subdirectories can also be mounted as if they were whole filesystems in their own right. So the question is why the original share is not mounting. This could be down to pool issues.

So this looks like the data is visible via the pool subdirectory, but its btrfs subvolume is failing to mount.
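
If you are comfortable at the terminal, one quick way to confirm that by hand (a sketch only; "share-name" is a placeholder for your actual share name, and it assumes the pool label is Data) is:

# list the pool's subvolumes; the missing share should appear here
btrfs subvolume list /mnt2/Data

# try mounting that subvolume manually; if it fails, dmesg usually says why
mount -o subvol=share-name LABEL=Data /mnt2/share-name
dmesg | tail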

Important:

Are you trying to add a share that has existed before, i.e. trying to add the failed-to-mount share again? That would be unwise.

I suspect your pool has gone read-only, which is what btrfs pools do when they discover a problem: it protects the existing data from corruption where possible.
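
One way to check this from the terminal (just a sketch, not Rockstor specific) is to look at the mount options on the pool's top-level mount:

# "ro" in the options column would confirm the pool has gone read-only
findmnt -o TARGET,OPTIONS /mnt2/Data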

So first off make sure the yum command states the real installed version as you understand it.

The folks on the forum may be able to take it from there.

The output of the following would also be useful initially to let folks know of the general arrangement:

btrfs fi show

And with such a large array you are most likely going to have to disable quotas, but not until you have confirmed the Rockstor version number from the terminal. This situation is likely to improve once we have completed our move to our pending "Built on openSUSE" version, where we will inherit many quota-related fixes and speed-ups.

So do nothing until you have confirmed the version number. If it really is a Stable channel install, then you can disable quotas from the command line, and that might actually be the only problem: with an array this size, and if you have many files / snapshots / lots of data (likely), the system could go so slowly as to block the mount of the various shares. This happens on boot, and you reported flaky behaviour and a non-responsive Web-UI during boot.

So it is either a poorly pool, or things are so slow that the mounts failed. In any case you need to try disabling quotas while you find out more, but you can't do that unless you have a Stable release install; and if you are only reading the version from the Web-UI, we don't know whether this has ever been a Stable install (and you may not know either, as you state having inherited the machine). So best check to be sure, then disable quotas if you can. If the pool is read-only you may well find out at that point, and will then have to take it from there.
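
For reference, disabling quotas from the terminal is a single command (a sketch, assuming the pool's top-level mount at /mnt2/Data); note it will fail with a read-only error if the pool has indeed gone read-only, which would itself be informative:

btrfs quota disable /mnt2/Data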

Hope that helps.

Hi there, I did nothing you listed as unwise.

yum info rockstor told me this:

Installed Packages
Name : rockstor
Arch : x86_64
Version : 3.9.2
Release : 53
Size : 85 M
Repo : installed
From repo : Rockstor-Stable
Summary : Btrfs Network Attached Storage (NAS) Appliance.

btrfs fi show
Label: 'rockstor_rockstor' uuid: youdontwannaknow
Total devices 1 FS bytes used 4.08GiB
devid 1 size 228.40GiB used 30.02GiB path /dev/sda3

Label: 'Data' uuid: youdontwannaknow
Total devices 23 FS bytes used 82.09TiB
devid 1 size 12.73TiB used 9.37TiB path /dev/sdm
devid 2 size 12.73TiB used 9.38TiB path /dev/sdh
devid 3 size 12.73TiB used 9.38TiB path /dev/sdy
devid 4 size 12.73TiB used 9.38TiB path /dev/sdf
devid 5 size 12.73TiB used 9.37TiB path /dev/sdv
devid 6 size 12.73TiB used 9.38TiB path /dev/sdo
devid 7 size 12.73TiB used 9.37TiB path /dev/sdp
devid 8 size 12.73TiB used 9.38TiB path /dev/sdz
devid 9 size 12.73TiB used 9.37TiB path /dev/sdw
devid 10 size 12.73TiB used 9.37TiB path /dev/sdl
devid 11 size 12.73TiB used 9.38TiB path /dev/sdk
devid 12 size 12.73TiB used 9.38TiB path /dev/sdj
devid 13 size 12.73TiB used 9.38TiB path /dev/sde
devid 14 size 12.73TiB used 9.37TiB path /dev/sdi
devid 15 size 12.73TiB used 9.38TiB path /dev/sdx
devid 16 size 12.73TiB used 9.38TiB path /dev/sdd
devid 17 size 12.73TiB used 9.39TiB path /dev/sds
devid 18 size 12.73TiB used 9.39TiB path /dev/sdt
devid 19 size 12.73TiB used 9.37TiB path /dev/sdc
devid 20 size 12.73TiB used 9.38TiB path /dev/sdn
devid 21 size 12.73TiB used 9.37TiB path /dev/sdq
devid 22 size 12.73TiB used 9.38TiB path /dev/sdg
devid 23 size 12.73TiB used 9.38TiB path /dev/sdu

Label: 'Austausch' uuid: youdontwannaknow
Total devices 1 FS bytes used 508.00KiB
devid 1 size 232.89GiB used 2.02GiB path /dev/sdb

Quotas are enabled on all pools.

Should I disable quotas, and will everything then be fine with the read-only "Data" pool?

@HansFranz Hello again.

I think that would be a good place to start, now that we know you are not affected by the testing-to-stable update issue where it mis-reports your installed version as the available one.

You only need to do this on the Data pool, as it's likely the most problematic. Sometimes it can take minutes to mount a volume of that size, and this may be throwing off Rockstor's mount during boot, but we don't know that yet. If you find you can't disable quotas (via the Web-UI), the error message may tell us something; if it does work, your system will be easier to work on as you proceed with more diagnosis. I'm not necessarily going to be able to take you through all of this, but hopefully others can drop in as more info comes to light.

After disabling quotas and giving it a few minutes to settle, you might find out more about what's going on after a reboot. A picture of the Data pool's details page would also be good, as it gives the device error reports, the raid level used, etc. Btrfs often goes read-only when it detects an error, so any recorded errors may tell us something. Also, some issues don't come to light until after a remount, and in Rockstor land that means a reboot.

But before anything else, disable quotas if you can, and let us know what happened. If the pool is read-only then it won't let you disable quotas, as this needs write access. If it does let you, then you will have a better time working out the problem, as the system will be more responsive.
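
For what it's worth, the per-device error counters and the raid level can also be read from the terminal (a sketch only, assuming the usual /mnt2/Data mount point):

# per-device error counters (write / read / flush / corruption / generation)
btrfs device stats /mnt2/Data

# data and metadata profiles, i.e. the raid level in use
btrfs fi df /mnt2/Data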

Hope that helps.

Because I don't have the time to go deeper into Rockstor and don't feel comfortable with unstable storage, I'm transferring the files via ssh and will then build new storage with something I know better. Thanks so far.
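
(For anyone in a similar position: a typical way to do such a transfer over ssh is rsync; the host name and destination path below are placeholders, not taken from this setup.)

rsync -avP root@rockstor-host:/mnt2/Data/share-name/ /path/to/new/storage/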

@HansFranz No worries, I hope your NAS ventures go well.

But do keep us in mind once our Stable channel openSUSE offering lands (quite soon now), as we will then have an upstream-maintained, enterprise-backed (SLES) kernel and btrfs stack, which we currently lack, plus a boot-to-snapshot capability for emergency rollback of the system.

Also note that your COW / checksumming file system options are currently limited on the DIY NAS front.

Hope that helps.