the export directory (/export/shared2)
However, /export/shared2 is not visible in df -h, but it is visible in df -ha, which means it is mounted as a dummy file system.
Question: why is an actual physical hard disk getting mounted as a virtual file system?
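The difference between the two commands can be seen directly: df -h hides pseudo ("dummy") filesystems, while the -a flag includes them. A quick sketch (output will differ per system; /export/shared2 is specific to the setup in this thread):

```shell
# df -h omits pseudo/dummy filesystems; -a includes them,
# so bind mounts and entries like proc or sysfs only show up with -a.
df -h  | head -n 5
df -ha | head -n 5
```

On the system described here, the export would appear only in the second listing.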
The mount is created on the client machine.
I restart the NAS. The exported directory is no longer accessible from the server.
I then umount and mount the export directory, and it is accessible from the server again.
This is a result of df's native de-duplication: it only shows a single entry for any given device ID.
You can verify this by checking the device ID of the mountpoint. For example on my system:
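A minimal sketch of that check, assuming the export and source paths from this thread (substitute your own; the generic paths below are placeholders so the example runs anywhere):

```shell
# Print the device ID for two paths. On a Rockstor box you would compare
# the export (/export/shared2) against its source subvolume mountpoint.
# Paths on the same filesystem report the same device number, which is
# why df de-duplicates them into a single entry.
stat -c 'device=%d  path=%n' / /etc
```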
There is no cause for concern here; this is by design.
It's also not something Rockstor is responsible for; it is the native behavior of the coreutils df shipped with CentOS.
I cannot imagine why you would need to see the exports in df anyway, as this data will be available when looking at the source mountpoint.
But the problem I am facing is that the export directory is not available after the NAS box is rebooted. I need to unmount and then mount the directory for it to be accessible by the clients.
@ashima
Ah, I had not realized. Reminder to self - read whole message.
You need to remount the directory on the Rockstor system, correct?
If so, this sounds (to me) like the exports are most likely attempting to mount prior to the NFS driver running, or perhaps before the BTRFS subvolume they mount from is mounted.
Otherwise, this might be a bit beyond my grasp.
What version of Rockstor are you running?
Can you advise whether or not the export directories are detected as mountpoints at all after a reboot?
mountpoint /export/<name>
And whether they’re listed in mount at all?
mount|grep export
And whether NFS is exporting them at the time?
showmount -e
And whether your external system can see those exports as well?
showmount -e <IP of Rockstor>
Finally, provide the contents of /lib/systemd/system/nfs-server.service for examination (in case it differs from mine).
That might provide a little more insight into the issue.
It's not something I'm having difficulty with on my own Rockstor system; my exports become available immediately after all services finish starting.
Thank you for replying. The NAS box is at a remote location. It'll take me some time to share the output of the above commands with you. I'll post it once I am at the location.
Hmmm, googling the error ‘Stale file handle nfs’ shows a lot of results, with a lot of different causes.
From what I can tell, the most likely cause is that something has changed on the mountpoint /shared2 since the NFS share was initially mounted on the client and before the NFS service has started on the server again.
Unfortunately, I can't see an easy fix for this beyond what you're already doing (stop NFS, re-mount the export, start NFS).
Perhaps you need a custom systemd unit to perform the above tasks, after NFS has started?
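A rough sketch of what such a unit might look like, assuming the stock nfs-server.service name; the unit filename and ordering targets here are my own guesses and are untested on Rockstor:

```ini
# /etc/systemd/system/nfs-restart-after-boot.service  (hypothetical name)
[Unit]
Description=Restart the NFS server once local filesystems and NFS are up
After=nfs-server.service remote-fs.target local-fs.target

[Service]
Type=oneshot
ExecStart=/usr/bin/systemctl restart nfs-server.service

[Install]
WantedBy=multi-user.target
```

It would then need systemctl daemon-reload followed by systemctl enable nfs-restart-after-boot.service to take effect on the next boot. Treat this as a starting point only; the real fix may be correcting the ordering dependencies in nfs-server.service itself.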
@Haioken, the problem boils down to restarting the NFS service. After rebooting, if I restart NFS the share is again available to clients. I was planning to write a simple script which restarts NFS after boot, but was unsuccessful. Any pointers?