Same issue.
Regardless of what raid configuration I use on the GUI, I end up with the single configuration.
Did you do this from the GUI? No matter what raid config I try, I always end up with a single config.
@asapkota Yip, perhaps you should give your drives a ZEROWIPE, to make sure nothing is left on them from previous raid configs. I always advise using DBAN zerofill with verify on after each write; it takes a while, but with the benefit that you know your disks are fine before using them again for a long time!
@TheRavenKing If this is a mission critical configuration I think we should be careful recommending raid5 in btrfs as per my previous references. I don’t think it’s considered fully matured just yet.
Nothing mission critical. Thanks
@asapkota Great. I was just getting a little worried you were storming headlong into dangerous waters, especially with Rockstor’s current bug set not really helping you. Have fun.
@phillxnet @asapkota Raid 5 with 3 disks is like playing roulette; I asked him what he wanted. Another remark: this is still software raid. Why not spend a few bucks on an LSI 8 port raid card, or whatever brand, so the hardware card does the work and you just create the pools you like?
Please read my post, I created raid 10 with 4 disks; perhaps a cosmetic bug after the update?
@TheRavenKing Nice reference on DBAN there. I have used this myself to force drives to repair themselves, being a cheapskate on occasions. As you say, once it’s been run all the way through, at least then you know that the drive can read every part of its surface, and borderline sectors can often be found and re-allocated as a result. Best to check the SMART stats afterwards to reassure yourself that there are no sectors that could not be re-allocated.
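If you want to do that check from the command line rather than the Web-UI, here is a minimal sketch using smartctl from smartmontools (the device name /dev/sdb is just an example, adjust it to the drive you wiped):

    # Print the drive's SMART attribute table (device name is an example).
    smartctl -A /dev/sdb

    # Attributes worth a look after a full DBAN pass:
    #   Reallocated_Sector_Ct   - sectors already remapped to spares
    #   Current_Pending_Sector  - sectors the drive could not read and has not remapped yet
    #   Offline_Uncorrectable   - sectors that failed even offline re-reads
    # Non-zero pending/uncorrectable counts after a full zero-fill are a bad sign.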
Thanks for all the help guys.
I am zerowiping the drives now.
I am going with
Raid 10 - 4 x 1 TB
Raid 1 - 3 x 1 TB
I have seen this too, even on VirtualBox, and @mchakravartula also reported this behavior. I think it’s something we are noticing due to new btrfs-progs or the kernel. If you run a balance job, it should fix the raid level, though I haven’t tested it enough times to say it works every time.
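For anyone curious what that balance is doing under the hood, the rough CLI equivalent is below. This is only a sketch, assuming a pool mounted at /mnt2/pool1 and a raid10 target (both the mount point and the level are example values; Rockstor normally drives this from the Web-UI):

    # Rewrite existing data and metadata chunks to the intended profile.
    # -dconvert acts on data block groups, -mconvert on metadata block groups.
    btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt2/pool1

    # A balance can take a while on a full pool; this reports its progress.
    btrfs balance status /mnt2/pool1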
@TheRavenKing ZFS and btrfs are not akin to regular raid as they have on-disk checksumming. It is often recommended that these file systems are given the bare drives, as they are a step up from hardware raid, which can’t really tell which copy of the data is the correct one when drives mangle it. ZFS and btrfs can, as they do on-disk checksumming. That is where these newer technologies come in and why they are being adopted instead of hardware raid, which can cause silent corruption a great deal more easily than ZFS or btrfs.
A nice reference on these file systems was provided by a recent forum post from @Bearbonez who kindly referenced this article Bitrot and atomic COWs.
And I thought this chap’s YouTube explanation RAID: Obsolete? New Tech BTRFS/ZFS and “traditional” RAID of the difference between traditional raid and that built into the likes of ZFS and btrfs was quite good. They are akin but not the same. This chap even demonstrates ZFS’s and btrfs’s ability to repair damage on the disks by purposefully dd’ing small chunks of nonsense onto the drives and then re-inserting them back into ZFS, btrfs, and a hardware raid controller. Spoiler - tldr or watch - the hardware raid controller fails to recover the data while the other two repair all the damage he did and return the original files seemingly unharmed. It’s pretty impressive, though btrfs has come a long way since he tested it.
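If anyone fancies trying the repair part on their own pool, the self-healing pass he demonstrates is essentially a scrub. A minimal sketch, assuming the pool is mounted at /mnt2/pool1 (example mount point only), would be:

    # Read every block, verify it against its checksum, and rewrite any bad
    # copy from a good copy where the raid level provides redundancy.
    btrfs scrub start /mnt2/pool1

    # The scrub runs in the background; this reports progress plus any
    # corrected and uncorrectable error counts.
    btrfs scrub status /mnt2/pool1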
Anyway, given you are obviously familiar with regular raid, I’d like to know what you think of those references. Sorry they’re a bit long, but it’s all I had to hand that I could remember being good.
It’s all a bit new to me really but then isn’t it always in this here computer gubbins.
Please do come back with the news after the full wipe and new install, but remember, whatever you do, don’t update before the pools are created.
That may not be possible. I have another issue where my SATA card does not get recognized unless I upgrade to 3.8-3. So I have to update before being able to create the pool unfortunately.
@asapkota I can’t be 100% sure, but it looks more like a setting in the BIOS. When you’re ready with zerowiping, reset the BIOS to ‘default’ and try again; some BIOSes have ‘optimal’ settings, but they might give you trouble!
@phillxnet Sure, but I can only give you my practical experience. We have hardware raid and software raid; BOTH can go wrong and BOTH can replicate errors. ZFS likes a raid controller that behaves like a JBOD, and it creates checksums and does a lot of calculations in memory, hence you need very good memory with ECC, and those memory modules are costly. The latest and greatest FreeNAS needs at least 16 GB to work fine… Hence you see the movement away from ZFS; it’s too costly and old hardware can’t even handle that amount of memory.
I can only give you my personal experience, and I still like a hardware controller, either in JBOD mode for ZFS or with the newer btrfs, being able to create my own RAID 0, 1, 5, 6, 10… whatever configurations, because every situation or set of requirements can be different. I was looking around for a simple NAS just like many others, I think; I know that btrfs is new and so is Rockstor. It’s all about personal experience; I bet you, like me, will always DBAN a disk before it goes back in, even if it is going into the same raid configuration. OK, enough waffle, time for a drink.
I did what @suman said and it worked. Created the pools, but they showed up as ‘single’. Then went and balanced both pools, and when they completed, the pools now say raid 10 and raid 1. Thanks guys for all the help.
@suman @asapkota @phillxnet I decided to do the update and after that I did the balances, but I can no longer access the raid 10 pool. Getting the attached error. Also noticed that the progress bar is not updating, nor the uptime in the top right corner. Seems still a bit buggy.
@TheRavenKing Yes, I think this is at least related to the outstanding issue I referenced earlier. This issue was initiated by an earlier forum post Error when clicking on pool. It’s a shame this has appeared right as we are getting all this enthusiastic involvement, but never mind; at least it’s fairly easy to reproduce, so that’s a bonus. I’m afraid my involvement on the code side is extremely “junior” for this one.
Just want to update everyone that with the 4.2.2 kernel and btrfs-progs 4.2.1, which are shipped with the Rockstor 3.8-8 update, the “single” profile problem is no longer reproducible.
Previously, when a Pool was created with a raid profile other than single, some data appeared to be distributed with the single profile and that’s why the Web-UI showed “single”. A balance job, however, fixed it. Now, with 3.8-8, this is no longer an issue.
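For anyone still on an older install who wants to check whether their pool is affected, the mixed chunk allocation is visible from the command line. A small sketch, assuming the pool is mounted at /mnt2/pool1 (Rockstor mounts pools under /mnt2, but treat the path as an example):

    # Show how data and metadata chunks are allocated per profile.
    btrfs filesystem df /mnt2/pool1

    # On an affected pre-3.8-8 pool you would typically see both a "Data, single"
    # and a "Data, RAID10" (or whatever level you chose) line at the same time;
    # after a successful balance only the chosen profile should remain.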