First I wanted to check that it would all work before making case mods.
I used a USB drive as the operating-system drive. It was USB 3.0, so performance was above USB 2.0 limitations, which made it a good fit.
The first stumbling block I came across was the on-board LAN drivers: the installation ISO v3.9.1 did not include them. I wanted to get this box online so I could update the installed packages, so I tried a USB-LAN adaptor I had. That worked, and once I had updated, the on-board LAN worked too, so I was good to go.
I took all the external walls off the case and removed all the internal bays by drilling out the mounting holes. I also had to grind away some of the front of the case to fit the drive bay in. The next issue was padding the gap at the bottom of the cage so the drives would sit as high as possible. I used an old hard drive, as it was the right height difference.
The back of the drive cage had some plastic padding to hold a fan, which interfered with the RAM, so I cut away a small corner to resolve this.
For the 6th drive I needed to use an eSATA port external to the case. I had an adaptor left over from an earlier purchase; I had bought it mainly because I wanted the powered eSATA cable that came with it, for a laptop.
It was similar to this image, but with only one port and a SATA power connector.
The UPS I had hooked up (before it failed) indicated the load was 110W.
That makes sense: the hard drives are about 10W each (60W total for six), and the graphics card draws up to 21W, which leaves about 30W for the mainboard and CPU at idle.
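The rough idle budget above can be checked with quick shell arithmetic (the per-component figures are the estimates from this post, not measurements):

```shell
# Idle power budget, using the estimates above (not measured values).
TOTAL=110            # load reported by the UPS, in watts
DRIVES=$((6 * 10))   # six hard drives at ~10 W each
GPU=21               # graphics card's maximum rating
echo "Mainboard + CPU at idle: $((TOTAL - DRIVES - GPU)) W"
```

That prints 29 W, close enough to the "about 30W" figure.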
I figured that older hardware would still be better than a new ATOM based system.
Also, if I had used 2x 8TB NAS / surveillance drives, it would have cost me about AU$800 for them alone.
But I guess with newer hardware, the wattage could be half.
Thank you for the intel.
I have also used older hardware, but I'm now collecting new hardware to build an A+++ server.
My setup currently:
mobo: Gigabyte GA-D510UD (rev. 1.0)
CPU: Intel Atom dual core 1.66GHz
RAM: 2x 1GB DDR2 (a nice friend has 2x 2GB … I will collect it after this Covid lockdown).
HDD: 4x hot-swap Samsung HD753LJ 750GB. These are old.
Power supply: external 120W, to reduce the noise level.
Everything neatly built into a Chenbro ES34169 server chassis.
It's running 56W idle. Way too much these days.
My new build should be running under 10w idle. I’ll keep you posted.
I've experimented with openSUSE Leap 15.2-1 installed on the SSD (the SSD is now attached on the SATA0 channel).
However, since the Rock-ons don't do everything I want to happen on my box, I've experimented in another direction and used Proxmox as the base install (I have experience with it through my work).
Within a KVM I have installed Rockstor 15.2-1 using a 32GB qcow2 vdisk on the SSD and another 8GB qcow2 vdisk for Rock-ons, with 2GB RAM. I've also passed through the 5 HDDs. I don't get SMART or power options in Rockstor.
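For reference, passing whole disks into the KVM is done on the Proxmox host with `qm set`. This is only a dry-run sketch: the VMID and disk IDs are placeholders for my setup, and the leading `echo` prints the commands instead of applying them.

```shell
#!/bin/sh
# Dry-run sketch: attach physical disks to a Proxmox VM by their stable
# /dev/disk/by-id names. VMID 101 and the disk names are placeholders;
# remove the leading 'echo' to actually apply this on a Proxmox host.
VMID=101
n=1
for DISK in ata-EXAMPLE_DISK_1 ata-EXAMPLE_DISK_2; do
  echo qm set "$VMID" --scsi"$n" /dev/disk/by-id/"$DISK"
  n=$((n + 1))
done
```

Passing drives individually like this is also why SMART doesn't reach the guest: Rockstor sees virtual SCSI devices, not the physical controller.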
I do gain the benefit of:
- historical graphs of the VM performance / IO usage;
- being able to install Rockstor from scratch remotely (no need to make boot USBs, etc.);
- installing other containers or KVM guests in parallel (RAM permitting), such as Windows;
- using BTRFS as a storage location in Proxmox (via an NFS map through Rockstor).
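That last NFS mapping can be registered on the Proxmox side with `pvesm`. A dry-run sketch, where the storage name, server address, and export path are all placeholders for my setup:

```shell
# Dry run: register an NFS export from the Rockstor VM as Proxmox storage.
# 'rockstor-nfs', the IP, and the export path are placeholders; drop the
# leading 'echo' to apply on a real Proxmox host.
echo pvesm add nfs rockstor-nfs \
  --server 192.168.1.50 \
  --export /export/proxmox-store \
  --content images,backup
```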
If you could dedicate and pass through a disk controller, then you may get those as well.
Just a thought.
Also:
That may well be cutting it fine if you are also running Rock-ons, and under some pool-repair scenarios the RAM requirements can escalate. It also depends on the size and number of pools, of course. You will have to let us know how this works out.

There are a number of other posts on the forum from folks running Rockstor within Proxmox, so they may also be worth a read. I know some passthrough methods were causing grief, and I think there can be kernel-compatibility issues in some sense, so make sure to keep Proxmox as new as possible, ideally with a kernel newer than the one in the Rockstor instance. I'm no expert on that front, but it's worth looking into if you are depending on this arrangement.

I personally only envisage Rockstor running on bare metal, as it then definitely has direct access (hence the controller pass-through rather than the individual drives), but whatever works for you is great. All additional levels of abstraction come with inevitable additional risks (or bugs).
Nice benefits though; we will have to look at some of these over time. But I'm keen on keeping a tight storage and services (Rock-ons) focus, as then we stick to the tool for the job, leaving other projects such as Proxmox to focus on their strengths, like VM management / hypervisor-type stuff.
I spent some time modifying this configuration again.
I salvaged an HP Z200 (i5 650 CPU) and managed to obtain 12GB RAM with it. (The built-in graphics saves the power a dedicated card would use, which is a plus. It also has iAMT.)
The modded case above is dropped. I had hoped to move the motherboard between the boxes, but HP's case layouts are not flexible (the CPU heatsink mount location and IO shield are fixed), so I had to stick with the case it came with. The Z200 case would not be an easy mod: fewer layers and more fixed panels.
The HDD cage also caused a delay at start-up. I guess it did more than just provide a neat enclosure interface.
I added a 2-port PCI-E SATA adaptor so I can run the SSD on that and the storage drives on the six onboard SATA ports. This let me add another drive to the BTRFS pool and another Proxmox storage location.
I managed to get the PCI SATA pass-through working with Proxmox, so I now have S.M.A.R.T. monitoring on the drives in Rockstor again. I also have HDD power-down working, which saves on noise and electricity.
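For anyone trying the same, controller pass-through is done per PCI address rather than per disk. A dry-run sketch, where the PCI address and VMID are placeholders for my host (and IOMMU must already be enabled in the BIOS and kernel):

```shell
# On the Proxmox host, identify the add-in SATA controller first, e.g.:
#   lspci -nn | grep -i sata
# Then hand the whole controller to the VM. Dry run with placeholder
# values: VMID 101 and PCI address 0000:02:00.0; remove 'echo' to apply.
echo qm set 101 --hostpci0 0000:02:00.0
```

Because the guest then drives the controller directly, SMART and drive power management work inside Rockstor.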
I also played with PCI pass-through for the sound card and USB controllers, and I was able to use them in Windows. It is neat that, with the way Proxmox works, I can still make use of hardware that would normally go spare.
I could see this technology being used so that one box hosts two independent workstations (each with a dedicated graphics card and USB for keyboard, mouse, and sound) while running some background virtual machines for services like NAS, wiki, etc. (More workstations if the motherboard supports more than two graphics cards.) But I guess the savings would be minimal and the complexity is much higher.
I also had another project, related to this only by its use of BTRFS. I've had this board around for a long time but never had an ideal project for it. The ASRock Atom D2700 board with 4GB RAM is 32-bit only, so I was unable to use Rockstor on it. I did, however, find that OMV has the BTRFS CLI tools (though no GUI for them) and a 32-bit install guide.
I now have a mirrored 2x 8TB SATA Seagate Archive BTRFS pool and a USB 2.0-attached mirrored 2x 2TB Seagate BTRFS pool. Because the pool was filling up, I've added a USB 3.0 4TB WD drive to the mirror and have been running a balance for a long time now; 25% remains. I know the performance isn't good over USB 2.0, but it's for media storage. OMV runs from a USB flash drive, with a module that reduces writes to the drive.
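Growing the mirror and rebalancing boils down to a couple of btrfs commands. Sketched here as a dry run: the device node and mount point are placeholders for my box.

```shell
# Dry run: add the new drive to the mounted BTRFS pool, then rebalance so
# existing data spreads across all members. /dev/sdd and /srv/media are
# placeholders; remove the leading 'echo' to apply on a real system.
POOL=/srv/media
echo btrfs device add /dev/sdd "$POOL"
echo btrfs balance start "$POOL"
echo btrfs balance status "$POOL"   # check on a long-running balance
```

The `balance status` command is what lets you watch the percentage remaining on a balance like the one described above.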
This replaces a WD My Cloud 8TB device as the storage location. OMV has BTRFS, performs better, and can handle more USB drives if I need it to. I just need to see if I can get OMV to do anything else like Rock-ons (OMV normally supports other add-ons but expects a 64-bit CPU).
Great and interesting write-up @b8two . I always enjoy reading about other user experiences and different build projects - gives me ideas for something for the future.