HDD Setup, pls help

Hey, I want to switch from FreeNAS to Rockstor.
Currently I'm using a single 6TB HDD, but now I need more space for my data.
What is the best setup for the most space and safety? 3x 6TB RAID 5 or 4x 6TB RAID 10?
I don't know Rockstor well yet, but I want to go back to a Linux-based system; I don't like FreeBSD.

Maybe someone can help me or explain what HDDs I should buy for a Rockstor setup with lots of space and good safety.

I've got a Xeon E5 v3 and 16GB DDR4 ECC. Rockstor gets a 120GB SSD.
80% media data
10% throwaway stuff
10% sensitive data (will get an extra backup)

Sorry for my bad language skills, thanks in advance.

:wink:

Hi @xerwulf and welcome to :fire: :fire: Rockstor :fire: :fire:

Happy to see someone moving from FreeNAS/BSD to Rockstor/CentOS.

You asked for the best space/safety configuration, so my current suggestion is RAID 10 rather than RAID 5 (btrfs still has some problems with RAID 5/6, as you can see in several forum threads too).
Disk size: your choice
Disk speed: 7.2K RPM is OK, 10K is better, 15K is enterprise grade

The first thing you'll notice moving from FreeNAS to Rockstor with your Xeon E5 v3 and RAM: btrfs isn't a RAM hog like ZFS :slight_smile:

Having Rockstor on a separate disk is good, but 120GB isn't really required (you can use a smaller SSD if you want).

Flyer/Mirko

1 Like

Welcome back to linux :)))))

So, first:
f**** what a system for storage :open_mouth: an E5 v3 :open_mouth: :open_mouth: :open_mouth: :open_mouth: I'm not complaining, it's just absolutely amazing to see so much horsepower for just storage.

Second,
As Flyer said, 120GB is a lot; when setting up a new Rockstor installation I personally don't go higher than 32GB :slight_smile: What I would also like to point out is that btrfs gives you a fantastic RAID profile (a sort of RAID level) called "dup", which means your data is duplicated on the same disk: if a single sector dies, you still have a copy in a different place. This is veeeery useful!!! It does require the console, though; the command is "btrfs balance start -dconvert=dup -mconvert=dup -sconvert=dup -f /" (the -f is needed because system chunks are being converted).
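A minimal sketch of that conversion plus a sanity check afterwards (the mount point /mnt2/mypool is a placeholder for your own pool; DUP for data on a single device needs a reasonably recent kernel and btrfs-progs):

```shell
# Convert data, metadata and system chunks to the DUP profile.
# -f (force) is required because system chunks are being converted.
btrfs balance start -dconvert=dup -mconvert=dup -sconvert=dup -f /mnt2/mypool

# Verify: each chunk type should now be reported as "DUP".
btrfs filesystem df /mnt2/mypool
```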

Third,
Your actual question:
a) +1 to @Flyer: RAID 5 & 6 on btrfs are broken right now (they will eat your data!!!)
b) +1 to @Flyer on disk speed, BUT in my personal experience slower-spinning disks (RPM-wise) are less prone to failure. Another thing is that slower-spinning disks are cheaper, so you can buy more of them and get the same performance by simply putting more disks into your array.
c) in storage it's best to steer away from "revolutionary technologies", so with large disks (4TB and up) make sure the sector size is still 4KB and not some exotic figure (like Seagate tries to push down the throats of desktop users). There's a proven record of "revolutionary technologies" backfiring in people's faces (the IBM Deskstar with its revolutionary head technology that crashed into the disk platter, then the Hitachi Deskstar with platter coating that flaked off... yeah, Hitachi bought the Deskstar line from IBM and paid the price for it).

Fourth,
High 5 for ECC ram !

Fifth,
I would go with 4x 6TB in your case (unless they're some exotic drives), BUT remember that RAID 10 has a minimum requirement of 4 drives, so if one of your drives dies you will not be able to simply rebuild your pool to whatever you want. It's possible, but you'll have to jump through a few hoops and it will take some time. For that reason, if you don't care about speed, I'd suggest going with RAID 1 (I personally only start using RAID 10 at six drives or more). RAID 1 will give you good performance on large sequential file reads due to the nature of btrfs, but it will be lacking in some scenarios.

Sixth,
btrfs allows you to change the "profile" (which means, roughly, the RAID level; possible profiles are: single, dup, raid0, raid1, raid5, raid6, raid10) on the fly (unlike ZFS), and with multiple pools you can use a different profile for different kinds of data.

So here is where btrfs shines; in your setup you could have:
metadata + system = raid1
sensitive data = raid1
throwaway data = single
media data = raid0

and with that you can save precious space. Honestly, I never run different levels myself; I just pop in more storage or remove data I don't care about. Anyway, read up on btrfs, and don't ignore the warnings... they are meant pretty seriously!
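The profile names above map directly onto mkfs and balance options. A quick sketch of creating a two-disk RAID 1 pool and checking which profile each chunk type uses (device names and mount point are hypothetical):

```shell
# Create a btrfs filesystem with RAID 1 for both data and metadata.
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc

# Mount it and inspect which profile each chunk type is using.
mount /dev/sdb /mnt2/mypool
btrfs filesystem df /mnt2/mypool

# The profile can later be changed on the fly, e.g. down to single:
# btrfs balance start -dconvert=single -mconvert=single /mnt2/mypool
```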

4 Likes

Hey, thanks for your time @Tomasz_Kusmierz and @Flyer,
I installed Rockstor on my VMware ESXi yesterday; I made a 30GB disk for the Rockstor installation and added my 6TB HDD via RDM.
My current 6TB HDD is a WD Red at 5.4K RPM, because it's not a production system; it's only for my two Kodi Raspberry Pis and storage.

I think for my sensitive data I will buy 2x 500GB SSDs and add one more 6TB HDD to the setup.
The server runs 24/7 at my home, so maybe I'd be better off without RAID 1 and doing weekly incremental backups via USB 3? Less power consumption, and the worst-case data loss would be 6 days, which is OK for me.
Is it OK to run 2x single 6TB on btrfs?
If not, I think I'll stick with 2x RAID 1.
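For the weekly USB backup idea, btrfs send/receive can do exactly that incrementally; a sketch under assumed paths (/mnt2/pool for the pool, /mnt/usb for the USB disk):

```shell
# Week 1: take a read-only snapshot and send it in full to the USB disk.
btrfs subvolume snapshot -r /mnt2/pool/data /mnt2/pool/snaps/data-w1
btrfs send /mnt2/pool/snaps/data-w1 | btrfs receive /mnt/usb/

# Week 2: snapshot again and send only the delta (-p names the parent).
btrfs subvolume snapshot -r /mnt2/pool/data /mnt2/pool/snaps/data-w2
btrfs send -p /mnt2/pool/snaps/data-w1 /mnt2/pool/snaps/data-w2 | btrfs receive /mnt/usb/
```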

The Sabnzbd Rock-on runs nicely, by the way :D.
NFS also works very well.

Thanks for your helpful posts.

:wink:

Does anyone know the error blk_update_request: I/O error, dev fd0, sector 0?
The terminal keeps spamming this at me, but it seems to be nothing important.
Solution: remove the floppy drive from the VM :D!
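If anyone hits this on a VM where the virtual floppy can't be removed, blacklisting the driver inside the guest should silence it too (a sketch; run as root):

```shell
# Unload the floppy driver now and keep it from loading on boot.
rmmod floppy
echo "blacklist floppy" > /etc/modprobe.d/blacklist-floppy.conf
```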

And would you recommend the premium license for updates or not?

Hi @xerwulf, following up after @Tomasz_Kusmierz, I decided to give you my own answer to your new NAS system question (and not only about the NAS).

My office (50+ users) has an E5 v3 too, and here is my suggestion:

  • Take all that "horse power" and install Proxmox :fire:
  • Use part of the Proxmox boot disk for the :zap: :zap: Rockstor :zap: :zap: boot disk (you won't have SMART for this disk, but it's just the Rockstor boot disk, so we don't care)
  • Add more disks for your RAID and pass them through to the Rockstor VM (HDD passthrough on Proxmox) :fire:
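For reference, that passthrough is a one-liner on the Proxmox host; the VM id (100) and disk id below are hypothetical, but /dev/disk/by-id paths are the stable way to reference a drive:

```shell
# Attach a whole physical disk to VM 100 as its second virtio disk.
qm set 100 -virtio1 /dev/disk/by-id/ata-WDC_WD60EFRX-68MYMN1_WD-WX11D1234567
```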

Just to give you an idea of what you can do with all that power, here's my current config:

E5-2609 v3 (a real entry-level chip, but enough in our nice Linux world!)
8 GB ECC Ram
all nicely packed in an HP Proliant gen9

My VMs (please remember the 50+ user environment):

  • Debian domain controller with 1 processor and 512MB RAM (every time I talk with a WIN sysadmin celebrating Windows' super functions that supposedly make sysadmins' lives easier, I show a screenshot of this VM and say "please do your AD with Win Server on the same hardware config" :stuck_out_tongue:)
  • Rockstor VM with a 1 socket / 2 core processor and 1GB to 4GB RAM (try doing that on FreeNAS, heh...)
  • Tomcat container on LXC - 1 processor, 512MB RAM, dedicated to the ESET Remote Administrator Server (a separate container, and not on the DC, so if something goes wrong Active Directory stays up)
  • Windows VM required for some old apps running on IIS - 1 processor / 1 core with 256-512MB RAM and an old, old, old WIN2000 Server :laughing:
  • Debian container on LXC - 1 processor, 512MB RAM - app server for automatic document production and ransomware trapping (incrontab watches "ad hoc" fake bait files in "ad hoc" folders -> if they are accessed or modified, various actions/scripts fire -> ransomware prevention by killing the Rockstor and Windows VM connections)
  • Rockstor development environment VM I use to code on Rockstor :heart:

Missing: disk passthrough to Rockstor (I added the disks after the Rockstor installation, so I need another long data migration :confused: over 500K+ files of sensitive data)

Finally, enjoy your nice E5 v3 with that huge 16GB of RAM :smiley:

Flyer/Mirko

1 Like

Speaking of the E5 v3 for storage: I was surprised that anyone would use it "just" for storage.
Also, hey, I use ProLiants as well :stuck_out_tongue: When I need another machine I just buy a DL180 G6 with two 5520 Xeons and 24GB RAM for £90 and roll with it :)))))

Just don't forget I'm not a sysadmin of any sort; I just coincidentally do IT because I decided that for the current project I needed more software people, so we didn't hire any IT person.

Proxmox - cool toy !

A bit off topic:
Flyer, do you keep in touch with the maintainer? I've got a pull request for read-only snapshots that would mitigate the ransomware issues on Samba shares with shadow copy. It would be nice to integrate it, and it's a dead simple change.

Off topic 2:
Does anyone know of a bare-metal install of Jenkins? I will soon need to set up a high-performance system (4x Xeon + 12x small but high-IOPS, high-bandwidth SSDs) to compile a full Linux distro from scratch, with a lot of our code in it, on every commit to GitHub. You know: a continuous integration setup for a ton of code. And I'll need to set it up within 9-12 months.

I've seen your GitHub PR and I think it's in the normal "review queue" for Suman.

OK, so there is a review queue... good to know.

My RAID 6 lost a drive a little while ago, and I lost all the data. I'd avoid 5/6 and go with 1 or 10.

1 Like

Sorry for raising a dead thread, but I'm just wondering if there's an easier way to pass disks to Rockstor? I mean, it's all fun and games to use the console... but with Rockstor we're trying to actually escape from it, right?

Hi @Tomasz_Kusmierz,
actually, passing HDDs to a Rockstor VM isn't Rockstor-related, so there's no way to make it easier via Rockstor (you configure HDD passthrough on the host side, not on the Rockstor/guest side)

M.

Believe it or not, I actually know that :* I'm just frustrated that this Proxmox thing is essentially removing all the goodness of SAS / SATA / hot-swap / Rockstor's easy HDD setup -> it forces you to add and remove drives by hand and makes you reboot your VM to get the disks in :confused:

As a newbie Proxmox user I can relate to your frustration.

We can only hope the Proxmox developers make this easier to set up in the future.

The rest of Proxmox is relatively straightforward, so one can hope they are working on it.

1 Like

I'm "up to here" with Proxmox right now. I just spent unnecessary time trying to pass the serial number of each drive to the VM... everything seemed fine, then I tried to import the old pool from the disks... and it exploded :confused:

And on top of that, SMART is nonexistent, and I can't manage the power states of the drives to get them to spin down for the night...

Hi guys,
I can understand your point of view, but actually adding that "serial" is quite easy.
Meanwhile I had a look at the Proxmox git repository; contributing to that too?? Why not :grin: (the serial option in the Proxmox dashboard is a really small one, I just need time to check it after some major Rockstor fixes)
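For anyone finding this later: qemu-server accepts a serial= option on the disk, so the guest (and therefore Rockstor) sees a stable serial number. The VM id, disk id, and serial below are hypothetical:

```shell
# Pass the physical disk through and expose a fixed serial to the guest.
qm set 100 -virtio1 /dev/disk/by-id/ata-WDC_WD20EFRX-68EUZN0_WD-WCC4E0123456,serial=WD-WCC4E0123456
```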

M.

But there is no option to add directly attached disks in the first place, so it's not that simple.

Also, I'm puzzled right now about why Rockstor is refusing to add the pool... it mounts every subvolume (which I can see on the console) but just crashes with no error :confused:

You're right, Tomasz.
I was thinking about the serial option and not disk passthrough:
having worked on that code in Rockstor, I confused serial and passthrough, sorry :confused:

Tomasz, wait:
are you trying to take an already-installed virtual disk (I assume you had the entire disk assigned to the Rockstor VM) and move it to a directly passed-through disk?? You can't do that.

Host -> Rockstor guest with some storage (storage = a full disk, but inside a VM, even when you say "OK, take the entire disk", you're really passing a file with size = disk size).
So you can't migrate from a "virtual disk" (even though its size = the entire disk) to a real disk.
M.

What I'm trying to do is take the disks I was using as a storage pool (2x 2TB WD, partitionless, as Rockstor wants them when creating a pool) and connect them directly to my VM... I've got both disks in as "virtio1" and "virtio2"... I can mount the btrfs by hand from those and everything runs smoothly... it's just that Rockstor for some reason gets confused on import :confused:

I can even see the pool operating properly:

Any idea how to diagnose:

??
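A few console checks that might narrow down where the import dies (the pool label and the Rockstor log path below are assumptions based on a default install):

```shell
# Does the kernel see all member devices of the pool?
btrfs filesystem show

# Can the subvolumes be listed when mounted by hand?
mount /dev/disk/by-label/mypool /mnt
btrfs subvolume list /mnt

# What does Rockstor itself log during the failed import?
tail -n 100 /opt/rockstor/var/log/rockstor.log
```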