Best Possible Installation on Dell PowerEdge with PERC Controller

I am looking for some advice on the best possible installation method on a Dell PowerEdge server with a PERC RAID controller. The server is a PowerEdge R510 with eight 2TB drives. As best I can tell, there is no way to pass the drives directly through to the Rockstor OS. I can only create a RAID 0 for each drive and then install Rockstor; each drive shows up separately, but of course I have no SMART information. I know this is not an ideal situation, but it is what I have right now. I am using this server to store backups of my production data, so it is not mission critical unless, of course, the production server goes down.

I have installed and tested Rockstor several different ways on this server. I have created a RAID 0 with one drive for the OS and then created a RAID 5 with the rest of the drives on the PERC card, and it all worked fine, but from everything I have read here I should not install Rockstor on top of hardware RAID. I have also created a RAID 0 for each drive and then created a pool in Rockstor using RAID 1 across all the drives.

Knowing this is the only server I have available, I am looking for some guidance on the best possible installation for Rockstor in this case. Should I go with the hardware RAID 5 or the RAID 1 from within Rockstor? Speed does suffer with the software RAID 1: I can back up at about 80 Mb/sec with the hardware RAID but only 60 Mb/sec with the RAID 1 in Rockstor.

Thanks in advance for the advice.

Chris

Depends how you want to cut it … Let me give you a few options:

a) (my personal preference) get rid of the PERC (sell it) and buy that one:

it’s a semi-decent SAS card (I assume your front cage has a SAS expander :slight_smile: )
and you can just set everything up natively.

b) if your PERC has a battery backup you can use it to overcome the RAID 5 write hole :slight_smile: and put a JBOD btrfs on top of it (not sure why one would do that, but it’s still an option).

c) use your idea of all disks being in RAID 0 and use btrfs RAID 10 on top of them.

Now, option a) is preferred because btrfs will use all the disks to their full potential. Options b) and c) are terrible because they not only hide S.M.A.R.T. from you but also hide the geometry of the drives from btrfs (you know: heads, cylinders, platters, etc.), so btrfs cannot perform the typical optimisations for spinning rust, like the elevator algorithm or putting allocation groups on neighbouring platters.
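For reference, here is roughly what option a) (or c), from the OS's point of view) amounts to underneath Rockstor's pool creation: one btrfs filesystem spanning all eight disks with a RAID 10 profile for data and metadata. This is only a sketch; the device names /dev/sdb through /dev/sdi and the mount point are placeholders, and in practice you would let Rockstor's web UI create the pool for you.

```
# Check which block devices the disks appear as (names below are assumed)
lsblk -o NAME,SIZE,MODEL

# Create one btrfs filesystem across all eight data disks, RAID 10 for data and metadata
mkfs.btrfs -L backup_pool -d raid10 -m raid10 /dev/sd[b-i]

# Mount it via any member device and confirm all eight devices joined the pool
mkdir -p /mnt/backup_pool
mount /dev/sdb /mnt/backup_pool
btrfs filesystem show /mnt/backup_pool
btrfs filesystem df /mnt/backup_pool
```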

Thanks for your response. If RAID 5 on btrfs becomes stable, I will go the route of getting new controller cards that can just pass the drives through and use Rockstor that way, but for now I am going to stick with the hardware RAID 5 so I have access to more space.

Thanks again.

Chris

Just be veeeeerrrrrryyyyyyyyy aware that if you want to put btrfs on top of hardware RAID:

  1. your performance will not be the greatest
  2. you will not know what’s going on with your drives (no SMART)
  3. if data on one of the drives gets corrupted and is NOT flagged up by the drive itself to the RAID controller, then: the drive reads crap -> the drive tells the RAID controller “it’s OK” -> the controller does not try to rebuild the stripe because everything seems to be OK -> it passes the data on to btrfs -> btrfs compares the data with its checksum -> btrfs raises an “IO FAILURE” error, and from then on you can NOT access that specific file.

It is very important to remember that btrfs does not know whether the storage underneath it has any way of rebuilding its data from parity (there is simply no protocol for that) - so if a bad sector happens under a large database file and the HDD lies that everything is OK, you are f****
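To make that failure mode concrete: on top of hardware RAID, btrfs typically keeps only a single copy of the data, so a scrub can detect checksum mismatches but has nothing to repair them from. A minimal sketch of how you would see those errors, assuming the pool is mounted at /mnt/pool (the path is just a placeholder):

```
# Read back all data and metadata and verify checksums (blocks until finished)
btrfs scrub start -B /mnt/pool

# Summary of the last scrub: unrecoverable csum errors show up here
btrfs scrub status /mnt/pool

# Per-device error counters (corruption_errs, read_io_errs, ...)
btrfs device stats /mnt/pool
```

With btrfs RAID 1 or RAID 10 across separate devices, the same scrub would instead repair the bad copy from the good one.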

Remember, an HDD can magically swap a bad sector with a sector from its spare pool, read data from there, and fake an “OK” back to you - more drives do that than you can imagine. I’m telling you this because you walked into the conversation with half-decent hardware, so theoretically you care about the data.
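One small caveat on the SMART point: smartmontools can sometimes reach the physical drives behind an LSI-based PERC through its megaraid passthrough, depending on the controller model and firmware. Whether this works on the R510’s PERC is not guaranteed; the device path and disk IDs below are only placeholders to try:

```
# N is the controller's ID for each physical disk; try 0, 1, 2, ... in turn
smartctl -a -d megaraid,0 /dev/sda
smartctl -a -d megaraid,1 /dev/sda
```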

Understood, and again I thank you for your frank response. I do like the functionality of Rockstor, and although it can be buggy, setup is generally very simple and it works. I just think that not being able to use RAID 5 is a big limitation of the OS when it comes to backing up large amounts of data. It sounds like there may have been some breakthroughs on the problems btrfs has with RAID 5, and hopefully those will be integrated into Rockstor soon. Once they are, I will gladly switch to the software RAID.

Those breakthrough miracles - read the comments from the btrfs maintainers carefully: although those patches fix problems where nobody had looked before, they do not solve the real issues with raid56 - I would not recommend it for a good while yet.

Let’s say they make raid5&6 100% stable and bomb-proof overnight; there is still a tiny problem with using btrfs on different-sized disks, and there is a strong push for a major rewrite and functionality change … and as we all know, major rewrites can not possibly break anything, right? :laughing:

Also, may I suggest an HP DL180 G6? It is available in a 14-bay LFF configuration and you can pick one up for ~£80 … populate it with more disks and run RAID 10.