When I restart the NAS, the export directory is not accessible to the servers on which it is mounted

Hello Everyone,

I am new to Rockstor.

The commands and their results are as follows:
cat /etc/exports
/export/shared2/ 192.168.1.*(rw,no_root_squash,async,insecure)

df -h | grep shared2
/dev/sda 9.1T 1.1T 8.1T 12% /mnt2/shared2

df -ah | grep shared2
/dev/sda 9.1T 1.1T 8.1T 12% /mnt2/shared
/dev/sda 9.1T 1.1T 8.1T 12% /export/shared

The export directory (/export/shared2) is not visible in df -h, but it is visible in df -ah, which means that it is mounted as a dummy file system.

Question: why is an actual physical hard disk getting mounted as a virtual file system?

The mount is created on the client machine.

When I restart the NAS, the exported directory is no longer accessible from the server.
When I umount and then mount the export directory, it becomes accessible from the server again.

Any pointers?
Regards,
Ashima

@ashima

This is a result of native de-duping in df. It’s only showing a single entry for any given device id.
You can verify this by checking the device ID of the mountpoint. For example on my system:

[ root@rockout (pass 0s) ~ ]# stat -c %d /mnt2/media/
135
[ root@rockout (pass 0s) ~ ]# stat -c %d /export/media/
135

There is no cause for concern here, this is by design.
It’s also something Rockstor is not responsible for; it is the native behaviour of coreutils on CentOS.

I cannot imagine why you would need to see the exports in df anyway, as this data will be available when looking at the source mountpoint.
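
If you do want to see every mountpoint backed by the device, findmnt (part of util-linux, present on a stock CentOS base) doesn't de-duplicate the way df does. Something like this should list both mountpoints (device name taken from your df output, so adjust as needed):

findmnt -S /dev/sda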

Thank you Haioken for making the concept clear.

But the problem I am facing is that the export directory is not available after the NAS box is rebooted. I need to unmount and then mount the directory for it to be accessible by the clients.

Any help,
Regards,
Ashima

@ashima
Ah, I had not realized. Reminder to self - read whole message.
You need to remount the directory on the Rockstor system, correct?
If so, this sounds (to me) most likely like the exports are attempting to mount prior to the NFS service running, or perhaps before the BTRFS subvol they mount from is mounted.
Otherwise, this might be a bit beyond my grasp.

What version of Rockstor are you running?

Can you advise whether or not the export directories are detected as mountpoints at all after a reboot?

mountpoint /export/<name>

And whether they’re listed in mount at all?

mount|grep export

And whether NFS is exporting them at the time:

showmount -e

And whether your external system can see those exports as well

showmount -e <IP of Rockstor>

Finally, provide the contents of /lib/systemd/system/nfs-server.service for examination (in case it differs from mine)

That might provide a little more insight into the issue.

It’s not something I’m having difficulty with on my own Rockstor system; my exports become available immediately after all services finish starting.
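
If you want to check the start ordering yourself after a reboot, the systemd journal for the current boot will show when nfs-server came up relative to everything else; this is standard systemd tooling rather than anything Rockstor-specific:

journalctl -b -u nfs-server --no-pager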

Thank you for replying. The NAS box is at a remote location. It’ll take me some time to share the output of the above commands with you. I’ll post it once I am at the location.

Thank you,
Regards,
Ashima

Hi,
@Haioken
Sorry for the delay in replying. Here’s the output:

The shared directory is not available after reboot.
I waited for a full 15 minutes, but I still get:
-bash: cd: /shared2: Stale file handle

Output of commands:
mountpoint /export/shared2
/export/shared2 is a mountpoint

mount|grep export
/dev/sda on /export/shared2 type btrfs (rw,relatime,space_cache,subvolid=258,subvol=/shared2)

showmount -e
Export list for pronas:
/export/shared2 192.168.1.*

Apart from these:
lsblk
sda 8:0 0 9.1T 0 disk /mnt2/Volume1

df -h | grep shared2
/dev/sda 9.1T 1.1T 8.1T 12% /mnt2/shared2

Again, if I

  1. umount and mount /export/shared2/, and
  2. then restart NFS,

it starts working. The sequence is important; if I do 2 before 1 it does not work. (The commands I run are sketched below.)
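
Roughly, the commands are the following (the mount options are copied from the mount output above, so treat this as a sketch of what I do by hand rather than anything Rockstor does itself):

umount /export/shared2
mount -o subvol=/shared2 /dev/sda /export/shared2
systemctl restart nfs-server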

Hmmm, googling the error ‘Stale file handle nfs’ shows a lot of results, with a lot of different causes.

From what I can tell, the most likely cause is that something has changed on the mountpoint /shared2 since the NFS share was initially mounted on the client and before the NFS service has started on the server again.

Unfortunately, I can’t see an easy fix for this, beyond what you’re already doing (stop nfs, re-mount export, start nfs)

Perhaps you need a custom systemd unit to perform the above tasks, after NFS has started?

Hello Everyone,

@Haioken, the problem boils down to restarting the NFS service. After rebooting, if I restart NFS, the share is again available to clients. I was planning to write a simple script which restarts NFS after boot, but was unsuccessful. Any pointers?

Regards,
Ashima

I'm unsure exactly at what point in the boot process this needs to be inserted, unfortunately; however, at a guess, I think /etc/rc.local should be OK.

#!/usr/bin/bash
# Stop NFS before touching the stale export mount
systemctl stop nfs-server
# Force-unmount the stale export (replace <SHARENAME> with your share name)
umount -f /export/<SHARENAME>
# Start NFS again and re-export everything in /etc/exports
systemctl start nfs-server
exportfs -a

This is untested, and contains no error handling, but should be a decent starting point.

First we stop the responsible service and unmount the stale share.
Then we start the service again, and re-export all shares.
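
One caveat if you go the rc.local route: on the CentOS 7 base that Rockstor sits on, /etc/rc.d/rc.local is not executable by default, so it is ignored at boot until you mark it executable, something like:

chmod +x /etc/rc.d/rc.local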

This would be better as a systemd unit, with an ordering requirement on nfs-server:

[Unit]
Wants=nfs-server.service
After=nfs-server.service
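
A fuller (untested) sketch of such a unit, assuming the script above is saved somewhere like /usr/local/sbin/fix-nfs-export.sh (the name and path are just examples), might look like:

[Unit]
Description=Workaround: re-mount stale export and re-export shares after boot
Wants=nfs-server.service
After=nfs-server.service

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/fix-nfs-export.sh

[Install]
WantedBy=multi-user.target

You would then systemctl enable it so it runs on each boot.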

Try rc.local first, though, as it's much easier.
