UUID issue & Cannot access Rockstor Share from any Win 11 machine

I have installed Proxmox VE and then installed Rockstor as a VM.

I have set up a share. I can see the network folder in Windows. When I click on it, I get a new folder with the name of my share. When I click on that, I am asked for credentials despite having enabled Guest access and Browsing.

I have checked my credentials in Win 11 and deleted them in case of any legacy issues.

I have tried changing permissions to 777 in the Access control tab of the Share menu.
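
For reference, my understanding is that password-free guest access needs the Samba export to look something along these lines (a hedged sketch; the share name and path follow the /mnt2/Rocky path in the traceback further down, and "map to guest" sits in the [global] section):

    [global]
    # Send failed logins to the guest account instead of re-prompting Windows
    map to guest = Bad User

    [Rocky]
    path = /mnt2/Rocky
    browseable = yes
    # Allow connections without a password
    guest ok = yes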

I also have another issue: Rockstor is reporting non-unique UUIDs (and reports VMware rather than Proxmox VE) on its swap, data, and OS partitions. These are on an NVMe M.2 drive that also hosts 5 other VMs.

The share is on a Crucial 2 TB SSD, and no UUID problems are reported on it. The entire SSD is passed through to Rockstor.

Web-UI screenshot

Error traceback provided in the Web-UI:

Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/rest_framework_custom/generic_view.py", line 41, in _handle_exception
    yield
  File "/opt/rockstor/src/rockstor/storageadmin/views/share_acl.py", line 60, in post
    chown(mnt_pt, options["owner"], options["group"], options["orecursive"])
  File "/opt/rockstor/src/rockstor/system/acl.py", line 34, in chown
    return run_command(cmd)
           ^^^^^^^^^^^^^^^^
  File "/opt/rockstor/src/rockstor/system/osi.py", line 263, in run_command
    raise CommandException(cmd, out, err, rc)
system.exceptions.CommandException: Error running a command. cmd = /usr/bin/chown -R root:root /mnt2/Rocky. rc = 1. stdout = ['']. stderr = ["/usr/bin/chown: changing ownership of '/mnt2/Rocky/containers/de1746be5ebbcfb1fd54b3047708fd2f59d0ac7e4477cba569053247f04690bc/mounts/secrets/suse_745d038b37ff083570b5bddc84209bab0e7743e54fc0502fb2708caeba7e4636_credentials.d': Read-only file system", "/usr/bin/chown: changing ownership of '/mnt2/Rocky/containers/de1746be5ebbcfb1fd54b3047708fd2f59d0ac7e4477cba569053247f04690bc/mounts/secrets': Read-only file system", '']
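
From the stderr above, the recursive chown fails because it descends into Docker secret mounts under the share that are mounted read-only. A hedged sketch of how to spot such mounts, reusing the path from the traceback:

    # List every mount at or below the share's mountpoint with its options;
    # any entry whose options include "ro" will break a recursive chown.
    findmnt --submounts /mnt2/Rocky -o TARGET,OPTIONS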

@Rocky12345, welcome to the Rockstor community. What version of Rockstor are you running (top right of the Web-UI)?

As you can see in recent forum posts, the latest testing versions had/have some issues around shares, etc., so I want to ensure we’re looking at the right context.

Have you passed the virtual disk through as a “unique” device? I see some posts on here from a long time ago about virtual disk serials. I am not sure whether these still apply, as I am not a Proxmox user, e.g.:

From a quick browse of the Proxmox forums, it seems to me that the passthrough feature for virtual disks has still not been implemented (physical disk passthrough has been around for quite some time), so the forum post above might still be applicable.
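
If that older advice still applies, the usual workaround seems to be attaching the disk by its stable /dev/disk/by-id path and giving it an explicit serial (a hedged sketch, as I can’t test this myself; the VM ID, disk ID, and serial string are placeholders):

    # Attach the whole physical disk to VM 100 as scsi1 and assign a serial,
    # so the guest sees a unique, stable device identity (IDs are examples).
    qm set 100 -scsi1 /dev/disk/by-id/ata-CT2000MX500SSD1_XXXXXXXX,serial=CRUCIAL01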


Hi Hooverdan…

The Rockstor OS, data, and swap partitions are on an NVMe drive as a virtual machine. The NVMe is under the control of Proxmox VE (there are 6 other virtual machines running on Proxmox VE).

I fitted a Crucial 2 TB SSD; this is the only device passed through whole to Rockstor, and it now carries the full Samba share.

It’s Rockstor 4.5.8-0 running on openSUSE Leap, kernel 5.14.21-150100.24.46-default.

Incidentally, I have installed a 2nd instance of Rockstor, and the UUID problem no longer exists.

So I moved the 2 TB SSD passthrough from the old Rockstor VM to the new Rockstor VM. Lo and behold, I now have a visible Win 11 share on the new Rockstor.

The old Rockstor VM has now been wiped off Proxmox VE.

However, I now see two network locations, one called Rockstor and the other TestRocky. The former is inaccessible, but the latter is accessible, is the exported share, and reveals a network folder called Rockstor.

So what should I do about the inaccessible Rockstor network location, given that I have a Rockstor folder within TestRocky?

Thanks for the version info and clarification of your pass-through setup. Also, good to hear that the UUID issue went away, even though it’s strange that it took two VM tries to address it.

The two network shares you’re seeing: do you mean you see them from a Windows PC connecting to them, or do you see them inside Rockstor?

Just want to separate the Rockstor shares from the Samba exports, since that has different implications.


Right, before I reply fully, a bit of background to my endeavour:

I originally had 4 physical NASes (Seagate Centrals), named Datatank1 to Datatank4.

They are so old that they are now EOL and only support SMB1, which has more security holes than security walls. :slight_smile:

In addition, Datatank1 and Datatank2 have now both failed. So rather than replace/repair these two, I built a new PC with Proxmox on it, with the intention of also decommissioning Datatank3 and Datatank4, so I can finally dump SMB1 and have 4 new NASes running SMBDirect with support for SMB2 and SMB3.

This new PC has a single M.2 NVMe drive that hosts Proxmox VE and 6 VMs: DietPi, Ubuntu, OpenMediaVault, TrueNAS, Rockstor, and XigmaNAS.

My thinking was that rather than have 4 identical NASes, all with the same foibles, bugs, design flaws, and security holes, it would be better to have 4 different types of NAS and store the same data across all 4.

Furthermore, if Proxmox VE dies, I can boot an Ubuntu Live DVD on the host PC and have direct access to those 4 SSDs plugged into the motherboard.

Furthermore, data transfers between the 4 NAS VMs within Proxmox VE will run at 10 Gbit/s, as the data never needs to physically leave the host PC: Proxmox VE has its own virtual 10 Gbit/s switch, while the PC itself only has a physical 1 Gbit/s NIC.

There are 4 Crucial 2 TB SSDs physically plugged into the motherboard’s SATA ports, one passed through directly to each of the 4 NASes: one to OpenMediaVault, one to TrueNAS, one to Rockstor, and one to XigmaNAS.

There is also a 5th 2 TB Crucial SSD, which connects via USB-C and is the backup drive for all 6 VMs. The idea being that if the host PC or Proxmox VE fails, I can rebuild/repair/replace it, restore the VMs from that 5th SSD, pass through those 4 SSDs again, and get all 4 NASes back online.

Now, for the images:

This is what I see in Windows:

Clearly, I have just one each of Datatank3, Datatank4, OpenMediaVault, TrueNAS, and HP_printer, and I see ROCKSTOR and TESTROCKY.

The latter is the correct and fully working Samba share.

Trying to access ROCKSTOR gives me a rotating blue circle and, after several minutes, reports “network path not found”. I would like to sort this out, or it will be confusing to anyone using any machine to access Rockstor.

Herewith two more screenshots of what I see in Rockstor:

XigmaNAS is not set up yet; that is my next job, and then I can start transferring over Datatank3 and Datatank4.

So how do I deal with the ROCKSTOR icon?

Stephen

>>> Also, good to hear that the UUID issue went away, even though it’s strange that it took two VM tries to address it.

The original Rockstor VM was using VirtIO SCSI single.

The 2nd Rockstor was using VirtIO SCSI.

The former has a virtualised SCSI controller for EACH virtualised SCSI disk, whereas the latter uses 1 virtualised SCSI controller for ALL virtualised SCSI disks.

So that simple change appears to have resolved the UUID issue.
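
For anyone wanting to check which mode a VM is in, the controller type shows up as the scsihw line in the VM’s config file (a hedged sketch; the file is named after the VM’s numeric ID):

    # /etc/pve/qemu-server/<vmid>.conf (illustrative entries)
    # One virtual SCSI controller per disk:
    scsihw: virtio-scsi-single
    # ...versus one controller shared by all disks:
    scsihw: virtio-scsi-pci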

It was in fact my 3rd attempt. What seems to have made the difference was installing a new Rockstor VM alongside the old one, then deleting the old one. Previously I had deleted the old Rockstor before installing the new one, so something must have been persisting somewhere.

Even though all the Crucial SSDs have SATA interfaces, they all appear as SCSI1, SCSI2, SCSI3, SCSI4, and SCSI5 in Proxmox VE.

The M.2 NVMe is SCSI0.

So with the SSDs capable of 4.8 Gbit/s of payload (the ~600 MB/s that SATA III delivers), transfers between the 4 virtualised NASes should hopefully be fast, given the virtualised 10 Gbit/s network bridge within Proxmox VE.

Interesting setup, indeed!
I am surprised that you see ROCKSTOR as a network appliance, since you have really only set up TestRocky as the system that should be visible under the Network tab.
On my setup, I don’t see any of the Samba shares under the network, but I get the same behavior you have described when accessing TestRocky. So I’m a bit stumped about the phantom instance showing up there; that should not happen. Are you using the ‘wsdd’ Rock-on to make the Rockstor host visible on the network? That is what I do, and there it’s relevant that the host name definition matches the Rockstor appliance host name.


Yes, I am using the WSDD Rock-on in Rockstor.

I had called the 2nd Rockstor VM in Proxmox VE TESTROCKY, as the 1st VM was called ROCKSTOR and I needed a way to distinguish the two VMs within Proxmox VE.

As I had wiped the 1st Rockstor VM off Proxmox VE after installing the 2nd, I renamed the TESTROCKY VM to ROCKSTOR from within Proxmox VE.

Following your comment, I saw that the hostname in Rockstor was ROCKSTOR, but it was TESTROCKY in WSDD.

I then uninstalled the WSDD Rock-on from the 2nd Rockstor VM and reinstalled it, this time with ROCKSTOR set as the host name and WORKGROUP as the workgroup.
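
For reference, those two settings appear to map onto the underlying wsdd daemon’s options, so the equivalent by hand would be something like this (a hedged sketch based on the upstream wsdd tool; run on the Rockstor host):

    # Advertise this host for Windows network discovery (WS-Discovery),
    # overriding the advertised host name and workgroup:
    wsdd --hostname ROCKSTOR --workgroup WORKGROUP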

I have now solved that issue in the Network window. :slight_smile:

I have also now installed XigmaNAS, and I found that it, too, needed its WSD service turned on before XIGMANAS appeared in the Network view.

So now I have 4 virtualised NASes on my network, each with a 2 TB SSD.

The 4 virtualised NASes should be fast, running on an M.2 NVMe with the shares on SSDs instead of HDDs… :slight_smile:

The Ubuntu VM is only there for doing 10 Gbit/s transfers between the 4 NASes, should I want to rsync all 4 of them.
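
Should that rsync day come, a minimal sketch from the Ubuntu VM, assuming the NAS shares are mounted locally (both mountpoints here are hypothetical):

    # Mirror the Rockstor share onto the OpenMediaVault one, preserving
    # permissions and timestamps; trailing slash copies contents, not the dir.
    rsync -avh --progress /mnt/rockstor/share/ /mnt/omv/share/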

So I shall now speed test the 4 virtualised NASes from the Ubuntu VM and tweak them for speed.

They all have access to the same hardware: 8 GB of RAM each, a 4-core processor, a Crucial 2 TB SSD each, and 10 Gbit/s networking. So it should be a fair comparison. :slight_smile:
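
For the raw network leg of that comparison, iperf3 between the Ubuntu VM and each NAS VM should confirm the virtual 10 Gbit/s path (a hedged sketch; the host name is illustrative and iperf3 must be available on each NAS):

    # On the NAS VM under test (server side):
    iperf3 -s

    # From the Ubuntu VM (client side), a 30-second run:
    iperf3 -c rockstor.local -t 30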

After all that, I shall migrate Datatanks 3 & 4, decommission them, and dump SMB1.


Just in case anyone is curious:

Herewith a screenshot of how everything looks in Proxmox VE: :smiley:


Glad it’s all running now! Thanks for sharing your setup, too.