Failed to delete the share (tm). Error from the OS: Failed to mount Pool(seagate) due to an unknown reason. Command used ['/usr/bin/mount', '/dev/disk/by-label/seagate', '/mnt2/seagate']

Hi all,

I finally managed to get 4.0.7 installed on a RPi 400 and have been testing it as a simple file server with some USB hard drives. I took it offline for a few weeks, and in the interim removed the USB drives to use elsewhere. Starting it back up, obviously Rockstor complained of detached drives. I thought it would be as simple as removing the SMB entries from File Sharing, then deleting the shares/pools/disks but I’m stuck here:

Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/storageadmin/views/", line 381, in delete
    remove_share(share.pool, share.subvol_name, share.pqgroup, force=force)
  File "/opt/rockstor/src/rockstor/fs/", line 947, in remove_share
    root_pool_mnt = mount_root(pool)
  File "/opt/rockstor/src/rockstor/fs/", line 585, in mount_root
    "Command used {}".format(, mnt_cmd)
Exception: Failed to mount Pool(seagate) due to an unknown reason. Command used ['/usr/bin/mount', '/dev/disk/by-label/seagate', '/mnt2/seagate']

Not sure what to try next and would appreciate any suggestions.
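For anyone curious, the failure in the traceback boils down to a common pattern: a command is run and, on a non-zero exit, the exact command list is echoed back in the exception. A minimal Python sketch of that pattern (a hypothetical simplification for illustration, not Rockstor’s actual mount_root() code):

```python
import subprocess

def run_or_raise(cmd):
    # Run a command; on failure, surface the exact command list,
    # mirroring the "Command used [...]" text in the traceback above.
    # (Illustrative helper only, not Rockstor's real code.)
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(
            "Failed due to an unknown reason. Command used {}".format(cmd)
        )
    return result.stdout
```

With the backing USB drive detached, /dev/disk/by-label/seagate no longer exists, so the mount command exits non-zero and an exception of this shape is what surfaces in the Web-UI.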


@papaskitch Welcome to the Rockstor community.

Given you have removed the devices associated with the pool, the associated disk entries should be showing as detached. So you can try removing either the detached members or their associated pools.

Rockstor likes to hang onto the knowledge of prior pools, as that is how it identifies when members are missing. If, for example, one has a part-time pool consisting of, say, one disk that is only attached during some boot sessions, we need to keep our knowledge of that disk / pool / share in case it is re-attached on a subsequent boot. So we can’t just auto-remove pools when all their members are detached; hence the manual intervention.
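The policy described above might be sketched roughly like this (purely illustrative; the function and parameter names are my own, not Rockstor’s):

```python
def pool_removal_allowed(attached_members: int, user_confirmed: bool) -> bool:
    # When every member is detached the pool record is kept, since the
    # drives may simply be unplugged this boot session and re-attached
    # later; only an explicit user confirmation removes the record.
    if attached_members == 0:
        return user_confirmed  # manual intervention only
    # Pools with attached members are managed normally, never removed
    # via this detached-cleanup path.
    return False
```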

I’m pretty sure we still have some buggy behaviour in this area though. So try deleting the detached disks or the pool first. And if you could explain in more detail the system arrangement that led to this potential chicken-and-egg situation, we can hopefully generate a reproducer and get this sorted. As I say, we may still have a bug in these all-devices-missing situations. But I’m pretty sure we fixed at least a couple of these kinds of corner cases well before the current 4.0.7.

Well done on the 4.0.7 Pi4 build by the way. I think you are the first to report such an install. Nice.

Hope that helps, and let us know whether removing the detached devices or the pool first gets you out of the phantom pool/share situation.



@papaskitch welcome to the forum. And great to hear a successful Pi 400 install - nice one!
I don’t have an answer to your issue re the USB drives. However, I can report that I did something similar when I first tested v4 on a Pi4 last year (blimey, was it that long ago??), as I wanted to see how Rockstor reacted to such reckless treatment :slight_smile:
I seem to remember my install picked up where it left off, basically. I believe Rockstor relies on device serial numbers, so assuming they had not changed for some odd reason, I cannot think what the issue might be. Sorry!


@phillxnet, the RPi install was only possible with the straightforward instructions provided, so it’s really a testament to the Rockstor team. Thank you!

As for my issue, I should probably mention that I built the ISO using Leap 15.3 as the RPi 400 built-in keyboard wouldn’t work on Leap 15.2.

The steps I used to set it up were fairly straightforward (if a little fuzzy in my memory now; I wish I had been able to reply sooner). I attached 2 USB drives to the RPi and created 2 separate pools. I then created two shares on one, and two shares on the other. I then added some SMB sharing to the mix under ‘File Sharing’ and was able to successfully back up via Time Machine to one, and serve some media files from the others. After that, as I mentioned in the first post, I put it temporarily out of service and repurposed the drives. This is all redundant data so I wasn’t worried about data loss, but I did expect Rockstor to handle this use case (i.e. something like a single-drive failure on a single-drive pool).

In any case, after starting it up again without the external drives, the first thing I tried was removing the detached disks, but as far as I can tell there’s no option to delete them in the UI (i.e. no obvious garbage can icon in the disk list like there is in the pool list). Clicking on the drive doesn’t present any actionable options.

So I tried deleting the pools which I believe is what gave me the error from the original post. I then tried deleting the shares with similar errors. Here’s what both screens look like now:

Then it occurred to me that I had some Samba shares active on these pools, so I deleted those from ‘File Sharing’ and that worked without issue, so I thought I could work in reverse. Unfortunately, I still can’t delete the pools, disks, or shares in any sequence.

Happy to provide more info (logs?) if you think there’s something else worth trying. I’m not very technical, but can follow step-by-step instructions :wink:

Otherwise I’ll just re-image the boot drive and start over :smiley:


I’m not too worked up about it, and don’t mind starting over as I’m just playing around at the moment. It does look like a problem worth solving, however, since while I can likely plug the drives back in and delete them, that won’t always be possible for everyone.

If this is indeed a bug, and not just gross incompetence on my part :crazy_face:, I’m happy to help get it solved.

And @GeoffA, thanks for the warm welcome!


So, it turns out it’s me after all! When hovering over the exclamation point on the disk, I saw this tidbit:

I guess that’s the first step I should’ve taken! Under resize, there’s a clickable ‘remove disk’ link which takes you to a page that allows removal of the device from the pool. Then the little garbage can icon appears next to the disk and it can successfully be deleted! I should note that I couldn’t delete it on that same pool page; I had to navigate to the drive list and delete it from there.

Maybe there’s a way to make that more discoverable? Though I’ll likely never forget this one.



@papaskitch Thanks and glad the instructions worked out for you.

Interesting. We should probably add this to the docs somewhere.

Totally agree. And I’m pretty sure we ‘solved’ this one some time back, but we may have a regression. OK, so I was about to suggest the exclamation icon from your screen pic, but it looks like you found it. That’s a relief. We did have issues such as you appeared to run into quite some time ago, as our base disk management had some bugs initially. So I was concerned, and interested to see if we had somehow recreated them via a regression. But it looks like we are in the clear currently.

Yes, however this whole mechanism took quite some time to construct, as our Web-UI has to deal with quite a number of permutations, and I think, from memory, that this was the most immediate way through that I could find at the time, without major changes that would themselves likely introduce more bugs than they addressed. Likely we can improve on the documentation front and then have in-place help links “?” to the docs to help folks out that way. It’s always tricky to preserve what state we can while not making it too easy to accidentally remove something folks are expecting to return once a drive is re-attached.

Thanks for sharing your story / journey on this one. @Flox has just done a major documentation re-arrangement and I’m due, hopefully today, to get that merged and published. That way finding out Rockstor’s strange ways and means can be approached from that angle too.

Our current doc section for the Disks:
It very much under-serves (read: misses entirely) the exclamation icon, so, prompted by your report, I’ve created the following documentation issue:

Thanks again for the report. Much appreciated.


I removed two small slow SSDs, replaced them with two large FAST SSDs, and now can’t delete anything in the 3.9.2-57 version. Is this normal? What if I said BOTH SSDs died at the same time (possibly due to meteors, alien abduction, or whatever) and now I am stuck with “failed to whatever” from every standpoint?

Can’t delete or change the share, pool, or disks. Tried every clickable thing to remove/delete and nothing works.

Does the openSUSE version fix this?


@Tex1954 Hello again.

It’s not a known issue in either version. But the btrfs within our Rockstor 4 is literally years newer than that in our CentOS-based version, even the latest 3.9.2-57. Plus it gets updates from upstream as per the Leap OS base, as we use the upstream kernel.

From the sounds of it your pool may be poorly. And poorly btrfs pools often go read-only, hence not being able to do anything, as you describe. But in any case a newer btrfs, in both userland and kernel, is the way to go anyway. Hence our move to being “Built on openSUSE” in the first place.

Also, to narrow down an issue we need specific info, rather than just “I can’t delete anything”. It would definitely help to know what you attempted to delete and what resulted, i.e. does the Web-UI complain about not being able to complete something, and if so, what is the complaint? Also log entries from the time a failed delete happened. But in all cases (bar having to downgrade and pin docker for the time being) you are better off with our v4 variant. And we are working on the docker IPv6 failure reported elsewhere on the forum. Essentially we are looking to re-enable IPv6, which you can do for yourself easily enough if you are building a new installer anyway.

Let us know how you get on.


Okay, I did some more testing because it seemed the transfer speed could be improved. What I did was disconnect my main RAID-1 4TB drives, replace the Rockstor SSD with the Windows 10 SSD (I have a removable 6-drive contraption) and reset the two fast 512G SSDs to a Windows RAID-0. Well, it turns out that for some reason the system is even slower than Rockstor, even when using a RamDisk!

So, I went back to Rockstor again and now I can’t delete the share, pool, or disks, nor format or reinitialize them in any way. The SSDs remain unmounted and it seems no action can take place unless they are mounted. This is true whether or not the disks are actually physically installed.

(Right click and “open image in new TAB” to see full size)

Of course, they were reformatted for Windows testing and remain so.



@Tex1954 interesting finding re speed. Especially since btrfs does a lot more than almost all other filesystems with regard to checksumming. Likely there is something else going on here.

Yes, Rockstor had issues a while ago with hanging onto old pools in all-drives-removed scenarios. And since 3.9.2-57 (from your screen pic) we have made some major improvements in this area. See:

from 14 months ago. 3.9.2-57 was released in April 2020 (16 months ago):

Release 3.9.2-57 · rockstor/rockstor-core · GitHub 16 months ago.

The recommended version for new deployments / performance testing is now 4.0.8-0, from a couple of days ago (I’ve not yet updated the downloads page, which still states 4.0.7):
V4 Stable Release Candidate 9

Plus its btrfs kernel and userland are fully updated, whereas our 3.9.2-57 had older btrfs even on release. It’s one of the main reasons we embarked on this OS move from CentOS to openSUSE.

See how you get on with that one. We have also, in this latest version, worked around the known upstream docker/Rock-ons IPv6 issue we have had of late by re-enabling (but still not managing) IPv6.

Hope that helps.


Well, now that I know it isn’t me, things can move forward.

I checked out downloading the .60 version and I have no idea what to do with the files. It isn’t an ISO, so I am stuck.

Also, while trying to UNZIP or UNTAR the 4.08.00 downloads, I get nothing but a “can’t open, is not an archive” error.

Any help for noob?



The last version of our source code (the tar.gz files) that compiles against / is compatible with a CentOS base, such as is used in our near-legacy v3 variant, was 3.9.2-57. Thereafter we ran into a python library issue in CentOS and our dependency on it. But we were well on the way to our v4 “Built on openSUSE” variant by then. We have also since released our DIY installer solution for v4 in the following repository:

which is linked to, and recommended for new installs, at our downloads page on the main web site:

This is currently your first port-of-call to build your own installer. It will, as a consequence of the method we use (kiwi-ng), end up pre-loaded with all pending upstream updates. In other words, the day you build your installer it will include all openSUSE-distributed updates. The rpm version of Rockstor installed will currently be 4.0.8-0 (I changed this yesterday and updated the download page to have this newly released version as the recommended one).

We intend to make pre-built downloads available once we release our next official Stable release. But given 4.0.8-0 is Release Candidate 9, and we have now sorted the docker/Rock-on issue, things are looking positive for 4.0.8-0 being our next Stable release. So there’s that :slight_smile: .

So in short, for now, you need to follow the instructions in the above GitHub repo to build your own “Built on openSUSE” Rockstor 4.0.8-0 version. It can in turn subscribe to testing or stable just as before, and can import config and pools just as the CentOS one did.

We released CentOS and “Built on openSUSE” versions for a goodly while during this transition, and had intended to continue this until we reached the next stable version. But alas, our own technical debt and that of CentOS combined to defeat this goal. So after 3.9.2-57 we could only release rpms (which the new installer auto-includes/uses) that run on that openSUSE Leap base.

There are now many folks on the forum who have successfully built their own installer using that GitHub repo’s instructions, and feedback on making it easier/clearer is welcomed. The instructions themselves have now undergone quite a few enhancements, and as of just the other day it is possible to build the installer using any relatively modern linux install. Our prior, and now secondary, method within the instructions required a Leap 15.2 instance, either ‘real’ or within, say, a virtual machine. But as of a recent contribution from @kageurufu here on the forum, answering a long-standing request within the Readme, we now have the far easier and more flexible boxbuild method, which does the KVM (virtual machine) stand-up/setup for you on the fly.

Try building your own installer and report on your findings. It’s computationally long-winded but only a few commands on a modern linux install. We are very keen on having this build-your-own / DIY installer approach be well known and accessible, as in the previous years of our attempting to release installers as often as possible and ‘gold stamping’ them, they were almost always out of date the day they were released. This new DIY way, in keeping with our more technical audience (DIY NAS builders who install their own OS), ensures that each installer build has all pending updates pre-installed. This makes for a far faster stand-up and also means folks start out on the latest kernel / btrfs updates from the get-go.

In time we hope to arrange for auto-built installers, with the consequent downloads made available, but this, as stated, is not planned until we are at the next stable release. All in good time and bit by bit.

Let us know how you get on with the new installer. There are 14 months’ worth of updates on all sides between 3.9.2-60 and 4.0.8-0. I mentioned 3.9.2-60 specifically due to its inclusion of a potential fix for your reported issue. But you also want all the in-between fixes ideally. Plus there’s that several-years-newer kernel and btrfs stuff that is key to your experiments, which comes with the move from our CentOS base with elrepo kernel 4.12 to the bang-up-to-date (with backports) Leap kernels and openSUSE/SLES’s backports specifically of btrfs. CentOS never really did this, hence our use of the elrepo kernel. That was all way more hacky than what we now do in v4 (for which work actually started in the 3.9.* era). See the beginning of the following thread for some background:

and its precursor:

But your next port-of-call to test our modern offering is the GitHub repo with the instructions on how to build your own installer with all updates pre-installed; and specifically its

Have a read, and a go, and report any difficulties in a new forum thread. Then hopefully those with experience in this process can chip in and get you sorted. You will thereafter be able to easily build a fully updated installer on-the-spot. This is particularly useful for getting a fresh, fully updated install off-to-the-races, as it were.

Hope that helps.


Well, I was always looking for a “Turn Key” installation. I am a hardware guy with DSP/MCU design experience in machine control/robotics, not data processing or MMI/GUI oriented.

Instead of forcing me to spend endless hours learning how to setup Linux for a development environment and building an appliance, why not simply release an ISO I can simply install like you did on 3.9.2-57?

Why must I install Kiwi on a linux setup and figure out how to do this all myself? (I know, the instructions are so simple a child could do it!)

I am a user in this regard, not a developer. It is also why I rejected the 2016 version, when I bought a USB install stick, tested it, and found it unsatisfactory… I did the same for FreeNAS back then and still do not like ZFS in any case. In fact, my “dated” hardware was built on FreeNAS recommendations and just sat around after testing until recently. Surprisingly, the hardware still works perfectly.

So, in the first place it was made clear that at some point 4.xx would get released. If it gets released on a build-your-own basis, then I have potentially wasted my time already.

I have to think about this some more, there are other things that occupy my time at the moment… like surgery to keep my legs from falling off, home remodeling, and such other minor things.

We will see… I already have a Leap setup installed, and the first problem I had was that I couldn’t remote into it from Windows with TightVNC. It’s all those “special” bugs that keep Linux out of the mainstream. Imagine if every user had the same problems with Windows or Mac OS.

Sigh, now I have to start over anyway so we will see how it goes… firstly backing up with Windows again.



That’s not required for the installer build; the ‘developer’ environment instructions are here:
Built on openSUSE dev notes and status
and they are significantly more involved.

So agreed this is frustrating all around, but we are working on it. For context: my predecessor (as project maintainer) released the ISO build instructions for our v3 installer 6 years ago:

in that time we have had exactly 1 external contributor (and it wasn’t me or @Flox) and 8 commits.

Our current installer has now had input from 7 folks in addition to me, in just over a year, with 100 commits (revisions, sort of). We are trying here to foster informed developer contribution in our ‘method’ before we open our pre-built installer to the general public.

So I apologise for all the frustration, and good luck with your other concerns, legs etc. This release has been a long and drawn-out process that I will be pleased to put to bed before moving onto the next testing channel development. But in the meantime we have fostered greater community input, massively updated/changed our base OS (just as well, given CentOS’s recent demise), and hopefully moved much closer to a “Turn Key” DIY NAS appliance for the masses. Oh, and saved Rockstor from the fate of openfiler: unmaintained. Open source projects are easy to start and similarly die easily; they are far harder to maintain. And you have personally assisted in this maintenance. So thanks again for your continued and long-term interest in our project. I’ll PM you with pre-built installer availability as we move from private beta to pre-release on that front.

Thanks again for sharing your findings, and again I apologise for misreading your interests and our oh-so-nearly-ready status that wasn’t quite ready for you.

Cheers and hang in there.


Thanks for the input… as an exercise, I installed Leap 15.3 on my system and created a RAID-0 with the two fast SSDs (all else disconnected), and then spent hours trying to get SAMBA and the network to hook up properly using all the YaST stuff and instructions online… I had not much trouble creating the SSD RAID setup, but could just never see or be seen on the Windows network…

Alas, a few days and many hours later, it was just another Linux lesson in futility. Nothing worked.

I really can’t wait for the 4.x version of Rockstor to release… really looking forward to it.

