NFS Advanced Edit Incorrectly Checks Existence of Shares

Using NFS Advanced Edit to create an export of a subdirectory within a Share fails with the message “share matching query does not exist”, because the check is performed incorrectly.

Example: the following entry

/export/my_home/Documents *(rw,async,insecure)

will fail because the Share name is “my_home” but the code incorrectly looks for a Share named “Documents”.

Manual edits of /etc/exports will eventually get overwritten, so that’s not workable. A workaround to get past this bug is to create a dummy share named “Documents”, which lets this line pass the check. NFS will then export “my_home/Documents”, not the dummy share. The /etc/exports file then includes those exports, though they do not appear on the NFS page. I haven’t tested whether the Advanced Edits are preserved after further additions or deletions of NFS exports through the UI.

This bug relates to issue #1078. The bug was found in version 3.9.1 and confirmed to still be present in version 4.5.6-0.

2 Likes

Hi @Walt,

Thank you for bringing this old bug back to light; I personally was not aware of it. For reference, here’s the link to it:

As a workaround, I would recommend the one listed by Suman on that issue: using /etc/exports.d/*.exports

If I understand your example correctly, you have a Rockstor share named my_home that contains a subfolder named Documents and you would like to export this Documents subfolder. With this example in mind I think the following should work without needing to create a dummy share named Documents; it should also not be overwritten at a later time:

mkdir -p /etc/exports.d
echo "/mnt2/my_home/Documents/ *(rw,async,insecure)" >> /etc/exports.d/rockstor.exports
systemctl restart nfsserver

You should now be able to see it in your list of exports: showmount -e localhost
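
For reference, the output should look roughly like this (sketched from memory; any exports created through the Rockstor UI would be listed here as well):

showmount -e localhost
Export list for localhost:
/mnt2/my_home/Documents *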

I do agree with you that this is a bug, though, so we should work to fix the issue you found.

2 Likes

Thanks for the recommendation, Flox. I tested it and found a problem. My parent folder on the Rockstor server (my_home) contains these five folders and nothing else:

  1. Documents
  2. Images
  3. Pictures
  4. Thunderbird
  5. .Trash-1000

my_home is exported in the usual way from the Rockstor UI. The line in /etc/exports is
/export/my_home picasso(rw,async,insecure)

When I mount this from computer picasso, all five folders show up.
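
For anyone following along, the mount on picasso is just a standard NFS mount, something along these lines (the server name and local mountpoint here are placeholders for my actual ones):

mount -t nfs rockstor:/export/my_home /mnt/my_home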

Then I created the directory and file (/etc/exports.d/rockstor.exports) with these two lines as content:

/mnt2/my_home/Documents/ picasso(rw,async,insecure)
/mnt2/my_home/Pictures/ picasso(rw,async,insecure)

Then I restarted the NFS server. The two subfolders, Pictures and Documents, both mounted fine, so thumbs up for that. However, I later noticed that all the folders except Documents and Pictures had disappeared from the my_home mount, as if they didn’t exist at all, even though I confirmed through the console on the Rockstor server that they are still present. They are not visible in Dolphin and do not appear in the terminal when I type “ls -la”. I checked attributes, ownerships and permissions, and it’s not that. I tried rebooting my client computer and mounting only the my_home export, and that did not resolve it either. I then renamed /etc/exports.d/rockstor.exports to something entirely different and restarted the NFS server again. Once again I mounted the my_home export, and now the five folders are visible again.
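
If anyone wants to dig into this, the server’s active export table can be inspected while the problem is occurring; exportfs -v prints each exported path with its client spec and the effective options:

exportfs -v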

I really have no clue why this happens. I’ve provided you with details should you want to play around with it further. Anyway, my workaround, though less elegant, doesn’t seem to have this issue, so I guess that’s what I’ll use for now.

1 Like

Thanks for testing that!

To make sure I understand correctly, what are the NFS exports you need to have in the end?

Based on your findings, there seems to be some conflict between exporting the parent (through your export of my_home) and the additional exports of Documents and Pictures. If you just need Documents and Pictures to be exported, I would delete the export of my_home, for instance.

My apologies if that’s what you did and I misunderstood.

Yes, quite right about the conflicts. I have no inkling of how the addition of that extra directory and file affects the NFS server, so I’ll leave sorting out that mystery to others.

In my setup, the my_home export is the more important one. For example, Thunderbird saves its profiles there, so I can open Thunderbird on any PC and have access to everything (a bit unusual, I know; most people now use IMAP and keep their emails on the web). I’m setting up other software (e.g. FreeCAD, GIMP, Photoshop, etc.) similarly, so I can jump between PCs and it’s all there, seamless. (Three desktops, two laptops. Plus the Rockstor server. Plus a Raspberry Pi … Okay I confess – fully nerded-out. It’s gotta be something in this wine.)

my_home will eventually get loaded up with all the data and folders I currently have distributed across multiple PCs. Exporting Pictures and Documents is mostly a convenience. My Linux PC creates special colored folders for Pictures and Documents, so it’s just nice to have them vector directly into Rockstor. It also helps police my usage, so those files stay in the one repository and don’t end up scattered in multiple places again.

Anyway, workarounds become moot once that bug is fixed (#1078). Looks like an easy fix.

1 Like

Thanks for the additional information…

I’m curious what would happen if you deleted all NFS exports from Rockstor and then defined them all manually in /etc/exports.d/rockstor.exports… The reason I’m wondering is the following: when creating an NFS export, Rockstor first creates a bind mount at /export/<share_name> and then writes that path to /etc/exports; this is why it shows up as /export/my_home there. I’m not up to speed on the NFS history, but it seems to be more of a “legacy” way of doing it, as we can also just export the Share directly. Combining a Rockstor-managed NFS export of the bind-mounted share with subfolder entries for that same share in /etc/exports.d/rockstor.exports may be the reason for the conflicts… I have not tested it so I can’t say for sure, but that’s my best guess at the moment.
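
As a very rough sketch (not the exact code Rockstor runs, and using the share and client from your example), creating an NFS export from the UI amounts to something like this behind the scenes:

mount --bind /mnt2/my_home /export/my_home
echo "/export/my_home picasso(rw,async,insecure)" >> /etc/exports
exportfs -ra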

Famous last words :stuck_out_tongue:.
It’s not that simple actually, as I think it depends on what we want to prioritize. Indeed, the reason you saw that error to begin with is that Rockstor does a few checks to ensure that the path you want to export exists and that it is properly mounted on the system. We can either keep those sanity checks or remove them and let users freely define paths; one could argue that in the “Advanced Edits” dialog the latter is still appropriate… I personally may lean towards the latter.

1 Like

Yes, defining all exports manually in /etc/exports.d/rockstor.exports might resolve it. Alternatively, if the extra subfolder exports in /etc/exports.d/rockstor.exports were written as /export/my_home/… rather than as /mnt2/my_home/…, that might dodge the conflict. Anyway, I have it working now, so I’m not going to do anything more with this.
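
In other words, that untested alternative would be an /etc/exports.d/rockstor.exports along these lines (same clients and options as before; I have not actually tried this):

/export/my_home/Documents/ picasso(rw,async,insecure)
/export/my_home/Pictures/ picasso(rw,async,insecure)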

I agree with you about doing checks. IMHO, doing little or no checking of Advanced Edit input is best. Doing extensive checks means you’d have to anticipate every possible thing a user might want to do, or to limit what they might be able to do, neither of which seems reasonable. Just add a bit of explanatory text on the page such as, “Lines entered here will be added to ‘/etc/exports’ without being checked for correctness.”

3 Likes

I’ve learned more about that bug in NFS.

As mentioned above, I used a workaround to add two lines to /etc/exports via the NFS Advanced Edit button in Rockstor. Both lines began with “/export/…”. They did not cause the problem of hiding or occluding some of the directory contents. (See the posts above for all the details.)

Recently I tried exporting a third subdirectory using that same method, and that bug returned, along with an additional corruption of the filesystem. Therefore, the workaround that I described above IS NOT SAFE; it simply happened, by chance, to avoid the problem.

The additional corruption of the filesystem is explained in detail in NFS Causes Corruption of Filesystem.

In conclusion, none of the workarounds as described in this thread are safe from the NFS bug. The use of any of them is not recommended.

I have no experience with NFS, but was curious about this. While looking around, I found the following (part of the NFS documentation for systems where ZFS is the underlying filesystem). Could it be related to which version is “forced” when it’s not specified? (Found it here: https://docs.oracle.com/cd/E37831_01/html/E52872/shares__filesystem_namespace___protocol_access_to_mountpoints_.html)

Namespace NFSv2 / NFSv3

Under NFS, each filesystem is a unique export made visible via the MOUNT protocol. NFSv2 and NFSv3 have no way to traverse nested filesystems, and each filesystem must be accessed by its full path. While nested mountpoints are still functional, attempts to cross a nested mountpoint will result in an empty directory on the client. While this can be mitigated through the use of automount mounts, transparent support of nested mountpoints in a dynamic environment requires NFSv4.

Namespace NFSv4

NFSv4 has several improvements over NFSv3 when dealing with mountpoints. First is that parent directories can be mounted, even if there is no share available at that point in the hierarchy. For example, if /export/home was shared, it is possible to mount /export on the client and traverse into the actual exports transparently. More significantly, some NFSv4 clients (including Linux) support automatic client-side mounts, sometimes referred to as “mirror mounts”. With such a client, when a user traverses a mountpoint, the child filesystem is automatically mounted at the appropriate local mountpoint, and torn down when the filesystem is unmounted on the client. From the server’s perspective, these are separate mount requests, but they are stitched together onto the client to form a seamless filesystem namespace.

If I see this correctly, the protocol version is not part of the export when Rockstor creates one. I don’t know whether your workarounds specified it or not. And … whether it even matters here.
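
If it helps, the version a client actually negotiated can be checked from the client side, for example with nfsstat, which lists each NFS mount together with its mount options (including vers=):

nfsstat -m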

I saw some IBM documentation discouraging nested exports since they could cause “serious data consistency” problems (but that was also rather old).

And all of the above might not be relevant here.

2 Likes

Hey, thanks for the input, Hooverdan.

While researching this bug I looked into the “hide/nohide” export options. There is some similarity of concept, but it actually applies to a different situation. Say the server has two drives, and drive 1 is at the top of the filesystem tree while drive 2 is mounted in a subdirectory below that. If the whole filesystem (starting from drive 1) is exported via NFS, there is a question of whether the user wants to export both drive 1 and drive 2, or just drive 1. That’s what the “hide/nohide” option controls. (I know I’m conflating drives with filesystems, but this is just to keep the explanation short and simple.)
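
Schematically, the exports entries for that two-drive situation would look something like this (paths made up for illustration; per exports(5), nohide goes on the nested filesystem so that a client mounting only the parent still sees it):

/export/drive1 *(rw)
/export/drive1/drive2 *(rw,nohide)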

In my Rockstor system, I don’t mount any shares inside of any other shares, I don’t even know if Rockstor can do that. So this switch would not have any effect.

Buuut… your suggestion provides another possible workaround for the bug seen here: NFS defaults to V4. Perhaps by explicitly switching to V3, the bug could be avoided? If bugs were meteor showers, the optimal situation would be to eliminate meteor showers, but if that isn’t happening, then an alternate solution is to know where to stand.
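
If I go down that road, one server-side way to force it would presumably be to disable v4 in /etc/nfs.conf and restart the NFS server (I haven’t tested this; the section and key names are from nfs.conf(5)):

[nfsd]
vers4=n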

2 Likes

meteor showers, interesting analogy :slight_smile:
if you manually force the version with nfsvers=3 (as a mount option on the client side, since /etc/exports itself doesn’t take a version option), you can maybe see whether that prevents it from happening? I assume your client side can handle either of those?
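
For example, something like this on the client side (server name and mountpoint are placeholders):

mount -t nfs -o nfsvers=3 rockstor:/export/my_home /mnt/my_home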

2 Likes