Slow balance across two controllers (turned out to be quotas, not the controller)

tl;dr - Could drives connected to the motherboard instead of the LSI card be causing my balance to take forever?

Sorry for the long post:

I recently moved from MD RAID under Ubuntu to Rockstor to help keep my apps (Rockons) easier to manage/update, and to make managing shares and general storage upkeep easier. Moving the data over was actually pretty easy, as I kept the disks as singles while breaking and shrinking the MD array. Now that the MD array is removed and all of my disks are added to the pool, I've converted the btrfs pool to RAID10.
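
For reference, the conversion itself was just a rebalance with convert filters; the CLI equivalent is roughly the following (the mount point is only an example of where Rockstor puts pools, your path may differ):

    # convert both data and metadata profiles to raid10 in one pass
    btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt2/WD_Reds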

I am OK with losing that much space, as I was hoping to gain speed and reduce the overall management burden of the file system, but this is where my problems began…

I have 10ea 4TB WD Reds in one pool:

  • 8ea sitting on my LSI 9211-8i
  • 2ea sitting on the motherboard (Asus M5A97 Plus w/ 6 Sata ports running off of a SB950 chipset)

I also have

  • 1ea “Old” 1TB WD HDD (System Drive/Rockons)
  • 2ea SSDs (Kingston and Intel) as another pool (plex_config and some files needed for faster access)

I have been running a balance on the “WD Reds” pool for about a week and have made VERY little headway according to btrfs balance status. I also see a ton of “found extents” messages in dmesg for one drive, which I assume is btrfs logging as it finds and relocates data. The balance has been stuck on that one drive for a couple of days now, so, following up on my assumption, I checked where that drive is plugged in, and it's on the motherboard.
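
In case it's useful, this is more or less how I've been watching it from the CLI (the pool path is again just an example):

    # overall balance progress (chunks considered vs. relocated)
    btrfs balance status /mnt2/WD_Reds

    # recent relocation messages as btrfs works through block groups
    dmesg | grep -iE 'relocating|found .* extents' | tail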

The other drives haven't shown up in the log yet, so I don't know if this is truly the issue, but could the drive sitting on the motherboard be causing the entire balance to slow down this much?

Thanks for making it this far, and I hope someone has a clue about this whole thing.

Ok, “quick” update.

It appears it wasn't the disks being split across two controllers, but quotas causing the issue. Once I disabled quotas, the completion percentage increased by about one percent every half hour instead of one percent every 30 hours.
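
For anyone hitting the same wall, the CLI equivalent of what I did is just this (example mount point again):

    # turn off qgroup accounting on the pool; balance throughput jumped immediately
    btrfs quota disable /mnt2/WD_Reds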

After the balance finished, I re-enabled quotas, but the shares unmounted and the data in that pool was only accessible via the CLI. I even tried to mount the share manually, but that threw errors as well. I made sure quotas were enabled one last time and rebooted. That didn't fix the issue either; now all of my shares show 0 (16K) used.
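
My understanding is that the 0 (16K) figures can simply mean the qgroup numbers haven't been recomputed yet, so a rescan may be needed after re-enabling; something like this (example path, and I'm not certain this is the whole story):

    # turn qgroup accounting back on
    btrfs quota enable /mnt2/WD_Reds

    # recompute usage for all qgroups, then check when the rescan finishes
    btrfs quota rescan /mnt2/WD_Reds
    btrfs quota rescan -s /mnt2/WD_Reds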

After searching the forums, I found a post about moving to the “testing” branch fixing this on an HP NL server. So I switched to that channel, loaded the new kernel and btrfs version, rebooted again, and the shares now show as populated.
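
After that reboot I sanity-checked that the newer stack was actually in use with something like:

    uname -r          # running kernel version
    btrfs --version   # btrfs-progs (userspace) version
    btrfs fi show     # pool should list all ten devices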

NOW here's the next issue: I'm getting kernel panics after about 10 minutes of uptime and can't, for the life of me, figure out what's going on.

I am at the point of wondering whether I can stay with Rockstor. I love the idea of it, but with balances going excruciatingly slowly and chewing through CPU, which leaves the system unusable, how do I proceed from here? If I reinstall the OS, how do I safely restore the shares without losing my data? Do you think that would even fix this?

Thanks for at least reading this far…