Hiya,
It’s nearly two years since I started my adventure with Rockstor.
I’ve been running Rockstor 3 since then and with great pleasure.
The hardware I used was OLD …very OLD …but I like using old hardware.
When Rockstor 4 was in beta I decided to wait. Honestly, I totally forgot about Rockstor since everything was running smoothly.
But recently some hardware was donated to me …also old, but newer than my current Rockstor hardware.
So I decided to take the ‘leap’.
The hardware: an Intel Core i3 with 6GB RAM.
The hypervisor: initially Proxmox 6 but upgraded to version 7.
The software: the Rockstor 4 iso I found in this forum.
End result: perfectly running Rockstor 4.
Everything went really smooth. Even assigning physical disks to the Rockstor virtual machine.
And as a bonus everything is much faster now than the older hardware.
Hey Phill,
I have to wait for the hard disks I ordered, so I intended to do a new installation after all.
I didn’t look at the download page. Thanks for pointing that out.
Spurred on by @dont’s success with Proxmox and Rockstor, I’ve just tried this myself with great results. I’d never used Proxmox before, so it was a journey of discovery and learning.
Question to @dont: do you see much of a performance hit from virtualising compared with bare metal?
Also, the only way I could find (via web search) to add a new drive to the Proxmox environment as storage was via the CLI. Is there a way to do it in the GUI?
@GeoffA: As for a performance hit: Rockstor 3 ran on an Atom board with 4GB RAM and 4x 750GB Samsung disks. Performance was (is) OK.
Comparing bare metal to virtual with my configurations is like comparing apples with pears.
However, I can tell you that the web UI responds much faster in the VM than on the old bare metal.
As soon as I receive my hard disks I will do some proper disk I/O tests.
Well I have done some research into Proxmox and ended up repurposing a dual core AMD64 box with 8GB RAM to see what the deal is with this hypervisor.
I have used Virtualbox on my laptop many times before with good results, so was interested to see how KVM behaves.
It was a fun learning journey, and that old-ish dual core box is now running 3 VMs - Rockstor 4 and 2 Debian instances as a media server and utility server. This is all just a bit of fun and learning for me, but so far the stability and ease of system management is very comforting.
Performance is more than acceptable for this hardware.
That’s a respectable number, assuming a 1Gb network. I get similar numbers, but generally just use the network graph on the dashboard as an indicator. I also look at the disk activity widget, which when expanded shows disk throughput on a per-second basis - that’s good enough for me, and I don’t tend to look any deeper or do specific benchmarking.
As long as I get acceptable speeds in real use (i.e. uploading/downloading files to/from the NAS from my laptop etc.), I’m happy.
EDIT: Just an aside, and more of a rhetorical question, but why does the dashboard network widget always default to ‘lo’? I’d like it to default to the last one viewed which in my case is ‘eth0’.
I was planning to do some testing between several disk setups - e.g. via a qemu virtual disk, attaching a HDD directly to the VM, etc. But everything is working perfectly right now, so there’s no need for testing.
Rockstor is running off a typical qemu virtual disk.
I did a passthrough of a whole 2TB spinning disk for data and this worked fine. I tried this live while the VM was running, and it showed up in the Rockstor GUI disks section after a page refresh without having to restart the VM.
I could only create the passthrough using the CLI; I could not find a way via the Proxmox GUI (unless I am missing something there). Mind you, it’s only a couple of CLI commands to pass through the disk.
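For anyone wanting to try the same thing, the couple of CLI commands look roughly like this on the Proxmox host. This is a sketch only - the VM ID 100 and the disk ID are placeholders for your own VM and disk:

```shell
# List disks by stable ID so the passthrough survives reboots/reordering
ls -l /dev/disk/by-id/ | grep -v part

# Attach the whole physical disk to VM 100 as a virtio block device
# (100 and the ata-... id below are placeholders - use your own values)
qm set 100 -virtio1 /dev/disk/by-id/ata-WDC_WD20EFRX-XXXXXXXX

# Verify the new entry landed in the VM config
qm config 100 | grep virtio1
```

The disk then shows up inside the VM like any other drive; as noted above, Rockstor picks it up after a page refresh.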
Just last weekend I read more about BTRFS and Rockstor, and since I was given an old Supermicro motherboard, I decided to give it a try and ditch ext4, mdadm and Debian on my old NAS altogether. So here I am: I have just installed Proxmox 7.2.3 and Rockstor 4.1 on it. Like the people here, I did passthrough with a couple of disks, and everything seems to be working quite nicely. It is really fast and intuitive, so I am quite happy with it.
I just found a couple of things, which I am not sure whether I need to configure somehow:
SMART information is not being shown (maybe that’s not possible since I set up the disks as passthrough).
It’s also odd that in the Dashboard the Top Shares By Usage show 0% and 0 bytes used, when I know there should be about 2TB in use in various locations. I have 7 shares - are those too many?
Thanks very much for both Rockstor and for the help
Hello @khamon and welcome to the world of Rockstor
Great that you have got Rockstor running on Proxmox - I have a similar setup.
Re the SMART question: you will need to look at the SMART info in Proxmox for those passed-through disks, as it does not show in Rockstor itself. In Proxmox, just click on the ‘Disks’ menu item of the node where you have the Rockstor VM.
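If you prefer the shell over the Proxmox GUI, you can also read the SMART data directly on the Proxmox host with smartctl (a sketch - /dev/sdb is a placeholder for your passed-through disk):

```shell
# Run these on the Proxmox host, not inside the Rockstor VM
smartctl -H /dev/sdb   # quick overall health assessment
smartctl -a /dev/sdb   # full SMART report (attributes, error log, self-tests)
```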
As for the dashboard query - I’m not sure I understand, I’m afraid. Sometimes the dashboard stats only update when you refresh the page, and this has caught me out before. I could be barking up the wrong tree, of course…
Hi,
I don’t have passthrough of disks, but SMART is not supported in Rockstor for me either. I have WD disks.
Like @GeoffA says, you can see the SMART details through Proxmox.
As for the Rockstor dashboard: top shares look normal to me.
Well, I half expected the SMART part; it seems to have its logic, and honestly I don’t mind checking SMART in either the Proxmox console or web GUI…
But about the shares - it looks like I have something wrong there. I waited for a refresh, even rebooted, and even reinstalled Rockstor and rediscovered the shares, but still… this is what I have:
Yeah, that’s very odd indeed. As a test, are you able to SSH into the Rockstor VM and check the contents of the shares where you say there should be data? They will be found under /mnt2 from the root dir.
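Something like this, assuming a share named ‘media’ (the hostname and share name are placeholders for your own):

```shell
ssh root@rockstor-vm     # hostname/IP is a placeholder

ls -la /mnt2/            # all pools and shares are mounted here
ls -la /mnt2/media       # contents of one share
du -sh /mnt2/media       # rough size only - du is not exact on btrfs
```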
Unfortunately I cannot help/suggest further with that web GUI anomaly - hopefully someone wiser than me will be able to help. @phillxnet, @Flox, @Hooverdan spring to mind.
I apologize in advance for the extremely brief answer as I’m constantly out of time lately…
I suspect your oddity in the report of share size in the webUI is because you have quotas disabled on this pool. Quotas are indeed “needed” to have a used-space report.
Also note that du is not accurate for space use in Btrfs… I would refer to btrfs fi usage /mnt2/pool-name-here instead.
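On the Rockstor VM that would look something like this (the pool name main_pool is a placeholder - substitute your own):

```shell
# Check whether quotas are enabled: errors out if they are disabled
btrfs qgroup show /mnt2/main_pool

# Enable quotas so per-share usage can be reported in the webUI
btrfs quota enable /mnt2/main_pool

# Accurate space accounting for the whole pool
btrfs fi usage /mnt2/main_pool
```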
Hope this helps, and sorry again for being so brief. I hope at least it gave you some pointers.