Failed to mount share due to a low-level error

Good Morning people

Today I can’t access the folder. I can’t modify it either, and it returns this error.

    Traceback (most recent call last):
      File "/opt/rockstor/src/rockstor/storageadmin/views/", line 253, in put
        mount_share(smb_o.share, mnt_pt)
      File "/opt/rockstor/src/rockstor/fs/", line 661, in mount_share
        pool_device = get_device_path(share.pool.disk_set.attached().first().target_name)
    AttributeError: 'NoneType' object has no attribute 'target_name'

@tonyhr0x Hello there.

Can you give us some info on your current version of Rockstor, e.g. have you recently updated it? Are you on the testing channel updates or stable?

This can all help with folks helping you here.

Also, are there any red messages in the Pools overview or in the Web-UI header? Does a pool scrub work, for example?

Hope that helps.


Hi @phillxnet
I’m using version 4.5.1, and I have also updated all modules. I don’t have a stable version.

If I navigate into a pool, I see all folders and files, but I can’t move into another pool or disk.

@tonyhr0x Hello again.

4.5.1? I don’t recognise this version! See here for our testing releases:

Could it be you are running our last stable release included in an installer:

If so, there have been many changes since then, and over a year of development.
We are now approaching our next stable release.

Can you post the output of the following command run as the root user:

btrfs fi show

Just so folks here know the number of pools, drives, etc.

This may help others clarify your situation.

What we need is more info on your setup and if the pool with problems is actually poorly. Hence my request for you to try a scrub of that pool.

Hope that helps.


Sorry, this is 5.3.18-150300.59.106-default.

Software Updates returns this error:

    Traceback (most recent call last):
      File "/opt/rockstor/src/rockstor/storageadmin/views/", line 262, in post
        return Response(rockstor_pkg_update_check(subscription=subo))
      File "/opt/rockstor/src/rockstor/system/", line 325, in rockstor_pkg_update_check
        version, date = rpm_build_info(pkg)
      File "/opt/rockstor/src/rockstor/system/", line 121, in rpm_build_info
        raise e
    CommandException: Error running a command. cmd = /usr/bin/yum info installed -v rockstor. rc = 1.
    stdout = ['Loaded plugins: builddep, changelog, config-manager, copr, debug, debuginfo-install, download, generate_completion_cache, groups-manager, needs-restarting, playground, repoclosure, repodiff, repograph, repomanage, reposync', 'YUM version: 4.7.0', 'cachedir: /var/cache/dnf', "User-Agent: constructed: 'libdnf (openSUSE Leap 15.3; generic; Linux.x86_64)'", 'Installed Packages', '']
    stderr = ['allow_vendor_change is disabled. This option is currently not supported for downgrade and distro-sync commands', 'History database is not writable: SQLite error on "/usr/lib/sysimage/dnf/history.sqlite": Executing an SQL statement failed: disk I/O error', 'Error: SQLite error on "/usr/lib/sysimage/dnf/history.sqlite": Executing an SQL statement failed: disk I/O error', '']

And this is the report:
    Label: 'ROOT'  uuid: 4ac51b0f-afeb-4946-aad1-975a2a26c941
            Total devices 1 FS bytes used 4.25GiB
            devid    1 size 463.70GiB used 4.80GiB path /dev/sdb4

    Label: 'NAS09'  uuid: 1717423c-928e-4cc8-93ca-390915f49c15
            Total devices 3 FS bytes used 2.19TiB
            devid    1 size 931.51GiB used 748.50GiB path /dev/sdc
            devid    2 size 931.51GiB used 748.50GiB path /dev/sda
            devid    3 size 1.82TiB used 748.50GiB path /dev/sdd

    Label: 'NAS08'  uuid: 23a151ca-e501-47e0-81e9-91a80291b0e1
            Total devices 4 FS bytes used 811.13GiB
            devid    1 size 465.76GiB used 204.03GiB path /dev/sde
            devid    2 size 465.76GiB used 204.03GiB path /dev/sdf
            devid    3 size 465.76GiB used 204.00GiB path /dev/sdh
            devid    4 size 465.76GiB used 204.00GiB path /dev/sdg

@tonyhr0x Hello again, and thanks for the added info. Should help folks chip in with more assistance.

That is actually the kernel version. But it’s enough to know roughly the underlying base OS version. It comes from the following command:

rleap15-3:~ # uname -a
Linux rleap15-3 5.3.18-150300.59.106-default #1 SMP Mon Dec 12 13:16:24 UTC 2022 (774239c) x86_64 x86_64 x86_64 GNU/Linux

The Rockstor version is displayed in the very top-right of the Web-UI:

So there it’s 4.5.8-0, our latest testing channel version:

Plus we have:

Which suggests you are on what is now an End Of Life (EOL) base OS (openSUSE Leap) version.

As to the cause of the problems you are having, the following is suspicious:

That rather suggests that your system disk is not well, and has likely gone read-only. Btrfs will often force a pool into a read-only state if problems are found. Or this may be a read error from the disk itself.
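As a quick cross-check (a sketch only, not anything Rockstor ships), the "disk I/O error" in your Software Updates traceback names a specific file, so a simple writability probe of that path points the same way:

```shell
# A sketch only: probe the dnf history database path named in the
# traceback. A read-only or failing system disk shows up here as
# "not writable".
db=/usr/lib/sysimage/dnf/history.sqlite
if [ -w "$db" ]; then
  msg="writable"
else
  msg="not writable (or missing)"
fi
echo "dnf history db is: $msg"
```

On a healthy read-write system disk this should report "writable"; anything else is consistent with the SQLite error above.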

This command will tell us if btrfs itself has seen disk errors under the "/" mount, which is the "ROOT" labelled pool:

btrfs dev stats /

The output should be something like:

[/dev/sda4].write_io_errs    0
[/dev/sda4].read_io_errs     0
[/dev/sda4].flush_io_errs    0
[/dev/sda4].corruption_errs  0
[/dev/sda4].generation_errs  0

But your system drive is different: here mine is sda4 where yours is sdb4, but you get the idea.
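For anyone wanting to make the check mechanical, the counters can simply be summed; any non-zero total means btrfs has recorded device-level problems. A sketch, using the expected output above as sample data:

```shell
# Sum the error counters from "btrfs dev stats" style output; any
# non-zero total means btrfs has recorded device-level problems.
stats='[/dev/sda4].write_io_errs    0
[/dev/sda4].read_io_errs     0
[/dev/sda4].flush_io_errs    0
[/dev/sda4].corruption_errs  0
[/dev/sda4].generation_errs  0'
total=$(printf '%s\n' "$stats" | awk '{sum += $2} END {print sum + 0}')
echo "total btrfs device errors: $total"
```

On a live system, run (as root) `btrfs dev stats / | awk '{sum += $2} END {print sum + 0}'` to get the same total directly.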

Another command to try and get some info here is this one:

rleap15-3:~ # mount | grep " / "
/dev/sda4 on / type btrfs (rw,noatime,space_cache,subvolid=258,subvol=/@/.snapshots/1/snapshot)

Note that on my example system the output of that mount command displayed rw; this is what is expected.
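For those scripting this check, the rw/ro flag is the first entry in the parenthesised option list, and can be pulled out with plain shell parameter expansion. A sketch using my example line above:

```shell
# Extract the mount options from a "mount" output line and report the
# first flag, which is rw or ro.
line='/dev/sda4 on / type btrfs (rw,noatime,space_cache,subvolid=258,subvol=/@/.snapshots/1/snapshot)'
opts="${line#*\(}"    # drop everything up to and including "("
opts="${opts%\)}"     # drop the trailing ")"
flag="${opts%%,*}"    # first comma-separated option: rw or ro
echo "root mount flag: $flag"
```

On a live system, `findmnt -no OPTIONS /` gives the option string directly, avoiding the grep entirely.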

Just trying to get more info on what may have happened here.

Upgrading from 15.3 to 15.4 in-place is possible, but not if your system disk is poorly.

Let’s see if those commands have anything to tell us.

Hope that helps, if only to help others here chip in with what may have happened.


Hi @phillxnet tnx for your advice

For btrfs dev stats /:
[/dev/sdb4].write_io_errs 0
[/dev/sdb4].read_io_errs 0
[/dev/sdb4].flush_io_errs 0
[/dev/sdb4].corruption_errs 0
[/dev/sdb4].generation_errs 0

In Pools:

In Shares:

Can I copy their contents from the system shell to another disk or pool?

@tonyhr0x Hello again.

So it’s good there are no btrfs errors indicated for the ROOT pool via the “btrfs dev stats /” command.

Your Pool detail view of Pool labeled NAS08 is way off.

All disks are indicated as detached. That means they are not known to be “attached”, i.e. from Rockstor’s perspective, what it last knew of them is now gone. Their last identity, as it were, is no longer found on the system; we give the name “detached” to all drives that have gone missing.

Hence share “BACKUP08”, last known on the NAS08 pool (with all missing devices), not having a mount.

Yes, we have your last report of the overall btrfs status here, including NAS08:

Note in your last post we are missing the output from the following request:

That is important: if your system drive has gone read-only (ro) we need to know, to diagnose what may have happened here. The Web-UI depends on the system drive being writable to update its record of the system state and display it accordingly, and to see if the potential base OS update is an option.

If you can successfully create a mount of a pool (btrfs vol) or share (btrfs subvol), then you can of course do everything possible at the Linux command line with the accessible data. Rockstor by default mounts the pool in its entirety at /mnt2/pool-label-here, and the individual subvols at the same ‘level’ under their own mounts at /mnt2/share-name-here. Making these same mounts by hand will not interfere with what Rockstor tries to do anyway. And it may even give info that is otherwise only available in the Rockstor log when it tries to do the same.
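As a hedged sketch of those by-hand mounts (the labels NAS08 and BACKUP08 are from this thread; the commands are printed rather than executed here, since real mounts need root and the actual devices attached):

```shell
# Build the by-hand mount commands matching Rockstor's /mnt2 layout:
# the whole pool at /mnt2/<pool-label>, the subvol at /mnt2/<share-name>.
pool=NAS08
share=BACKUP08
printf 'mkdir -p /mnt2/%s /mnt2/%s\n' "$pool" "$share"
printf 'mount /dev/disk/by-label/%s /mnt2/%s\n' "$pool" "$pool"
printf 'mount -o subvol=%s /dev/disk/by-label/%s /mnt2/%s\n' "$share" "$pool" "$share"
```

Run the printed commands as root only once you have established the pool is healthy; mounting by label avoids depending on the sdX device names, which can change between boots.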

But all in all, you do need to establish if your root is read-only, as per the command suggested. Otherwise you will still encounter a whole slew of issues, both in the Web-UI and at the command line.

Do be sure to check logs via say:

less /opt/rockstor/var/log/rockstor.log

and the general system log via its interpreter, journalctl.


Current summary: Rockstor has all member disks of NAS08 as detached, but the command line does not see this (at least not any longer). But again, if we have a read-only system, all bets are off, as the database summarising system state cannot be written to.

Have you tried rebooting this system, if such a thing is practical? And do you have any more info on state changes of any kind that led to the situation you are currently in? It all helps with trying to fathom the current system state and what may have led to it. But let’s see that mount command output first, maybe.

Hope that helps.


Hi, again, @phillxnet
So, I can’t execute this command:

    # mount | grep " / "
    /dev/sda4 on / type btrfs (rw,noatime,space_cache,subvolid=258,subvol=/@/.snapshots/1/snapshot)

This is the result:

    bash: syntax error near unexpected token `('

If I send the command journalctl,

I found this in the report.

I’m sorry to waste your time, but I really need to recover the files.

@tonyhr0x Hello again.

Remember that in my instructions earlier I pasted both the command and the expected output; you are blindly cutting and pasting the entire thing, hence the result. We are currently looking for what is going wrong with your system when previously it was not. There do appear to be clues in the journalctl output, but let’s also clear up the root mount issue. You need to do your own research on what is going on here too. We cannot promise any level of technical support (at least yet). We are a DIY NAS project; that means there is an assumption of some technical expertise, and when it breaks you either have your own expertise on hand, or have some available. Pasting extremely well-known output of a command in, as the command, is an indication that I have misread your own expertise here. My apologies.

What is the output of:

mount | grep " / "

Note that the previous command’s leading text indicated the OS version (my use of the prompt designates the base OS, Leap 15.3), the directory “~” (the user’s home), and “#” (a root user). This tells folks more than the command alone, but in this case was potentially misleading.
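The failure can even be reproduced deliberately; a sketch showing that bash rejects the pasted output line at the "(" before running anything:

```shell
# Feed the pasted *output* line back to bash as if it were a command;
# the "(" in the option list is shell syntax, hence the error seen above.
pasted='/dev/sda4 on / type btrfs (rw,noatime,space_cache,subvolid=258,subvol=/@/.snapshots/1/snapshot)'
err=$(bash -c "$pasted" 2>&1) || true   # bash exits non-zero on the syntax error
echo "$err"
```

So the error you saw came from the shell parser, not from any disk or Rockstor problem.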

The output should be akin to what I pasted before.

I, as the main developer and maintainer of The Rockstor Project, do have to take care with my time spent here helping folks, but it can be rewarding and helps to keep me in touch with difficulties folks are facing with what the development team is producing. What we are trying to do in this thread is help to find where your issues lie. And this mount command is to establish the state of the root mount. It is also directed to all others reading, in an attempt to help with diagnostic method. Your Web-UI state seems to be out of sync with the disk state, hence checking if your “ROOT” labelled pool, the system pool, has gone read-only. We have already seen there are no errors recorded by btrfs, but our concern here is to try and track down why there is this out-of-sync situation. Getting your data back is strictly in the realm of your backup provisions. But if your system drive is duff, then it will do little good to simply do a fresh install onto the same duff system drive. Hence the attempted check.

You are not wasting my time; that would be my area of responsibility :slight_smile: . And I’d like to help where I can, but I do have to focus on the Rockstor elements here. Your log entry indicates some serious compatibility issues between the kernel (ACPI subsystem) and your hardware. You need to look these up. Also remember you are running a now EOL OS, hence the work here to identify the root cause. If your system drive is duff, you can re-install a fresh Rockstor instance (all data drives disconnected, just in case), then power off, reconnect all data drives, and import the pools into the new system. But if the system drive has issues then that would be a dead end (or a pending one).

You may also want to look up how to get SMART info from the command line, as the Rockstor Web-UI may be failing you because it is trying to run on broken legs.
Also look up those red errors in your logs. These are all Linux-central issues, and nothing Rockstor related other than that we run on a base Linux OS. But you are seeing a BIOS compatibility error in ACPI. That’s not normal, and likely not healthy. Ergo: use a newer kernel, and address your base OS EOL scenario also. But you do need to work at interpreting the commands offered here. We teach here when we can, but it’s always difficult to assess folks’ knowledge. My apologies again for pitching things wrongly. Incidentally, we are working, via such interactions as this, towards a collection of commands that folks can run, hopefully in an automated sense, to help with assessing/diagnosing what is going wrong with someone’s system. It can also help to have chronological info on what led to a failure.

Thus far you have not done all that has been suggested. Doing so can help, if in no other way than to avoid demotivating folks trying to help.

I would ask if others here on the forum could review what has been examined thus far, as I’m a little pressed time-wise with getting our next stable release ready and organising our new fiscal setup, so that we can in time stand up commercial support options for folks in a rush/corner, as it were.

Do go over what may have been missed to date in this exchange, to see if there is still something requested and not tried/supplied. Diagnosis is tricky at the best of times.

From what we have found out to date:

This suggests a failed system disk (the “disk I/O error” stands for Input/Output, i.e. can’t read/write). This is good in a way, as it’s not your data disks. Assuming you haven’t used the ROOT pool for any non-replaceable data! Which brings us back to the mount command request. All drives fail at some point. And again, look to retrieving SMART data via the command line for that drive. Running on broken legs (an ro, or broken, ROOT pool) means the command line may be your only recourse. Plus there is always the re-install of a fresh, non-EOL Rockstor instance on a fresh known-good system drive if you do need this assistance. Just take extreme care not to overwrite any data pool member if you have no backups.

Hope that helps.


Hi @phillxnet
tnx for your time :smiley:

I think I fixed it.
In the disk list, where the SMART function was missing, I clicked on RESCAN, and it rebuilt the whole disk index. Now I have everything online again.

Thank you very much for the time you dedicated to me, and sorry for the mistakes I made