Finally built my own NAS

The 2TB in my desktop were nearly full, so I had been looking for a NAS system for the last six months. At first I was looking at a Synology or QNAP system, but their price, and the fact that I wasn’t sure they would be “future proof”, brought me to the decision to build my own.

But now I had an even bigger choice :confused:
FreeNAS? BSD, hmm.
OPM? No zfs/btrfs, and I don’t like the interface.
Rockstor? Too new?

And which hardware should I use?

But after a lot more reading, and with a bit of pressure from my old drives (one started to throw errors), I moved on with Rockstor. I like the concept of btrfs, even if it is not 100% stable, but I don’t plan to use RAID5/6 anytime soon. Rock-ons seem to be a nice and easy way to add services. And lastly, this community seems to be really helpful.

System:

  • Fractal Define Mini Tower
  • Asus B150M
  • Intel Pentium G4400 (2× 3.30GHz)
  • 8GB DDR3 RAM
  • 120GB Kingston SSD for system
  • 2x WD RED WD60EFRX 6TB (accidentally ordered only one HD at first)

Setup and configuration of the system was easy, and everything worked as planned.
I could set up my shares, Samba, Plex, Transmission, and my own jDownloader Docker container without
any trouble, even though I don’t have much experience with this. :slight_smile:
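For anyone curious, adding a service like this outside the Rock-ons catalogue is essentially just a docker run away. A minimal sketch, assuming the community jlesage/jdownloader-2 image instead of my own build, and with hypothetical host paths on the pool:

[root@rockstor01 ~]# docker run -d --name jdownloader \
    -p 5800:5800 \
    -v /mnt2/pool1/jd-config:/config \
    -v /mnt2/pool1/downloads:/output \
    jlesage/jdownloader-2

That image serves its GUI in the browser on port 5800.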

Today the second hard drive was delivered. I added it to the pool and tried to change from single to RAID1. After the balance job was done, and a reboot, the web interface still shows a single configuration. But btrfs gives me this result, which looks like RAID1 to me.

[root@rockstor01 ~]# btrfs fi df /mnt2/pool1/
Data, RAID1: total=496.00GiB, used=495.24GiB
Data, single: total=1.00GiB, used=0.00B
System, RAID1: total=32.00MiB, used=112.00KiB
Metadata, RAID1: total=1.00GiB, used=570.83MiB
GlobalReserve, single: total=192.00MiB, used=0.00B
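For reference, the conversion itself boils down to a filtered balance; a minimal sketch of the equivalent command line, with the pool path taken from the output above (I’m assuming the Web-UI runs something equivalent under the hood):

[root@rockstor01 ~]# btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt2/pool1/
[root@rockstor01 ~]# btrfs balance status /mnt2/pool1/

The leftover “Data, single: total=1.00GiB” line above is exactly the kind of stray chunk the “double tap” advice at the end of this thread is about.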

I now have to look into it, unless one of you knows a solution.
But nevertheless, I’m happy with Rockstor and my setup.

If I solve this last issue, all my files can be moved to Rockstor.

Thanks for the great work, and for reading.

@grebnek Welcome to the Rockstor community, and thanks for the ‘User Stories’ entry; these are great to see, and that’s quite a nice system you have there.

As per your report, I have confirmed this behaviour and created the following two issues as a result:

and:

I’m not sure, but in the testing channel updates we recently added the capability to add drives to a single-raid array; prior to this it was a feature we didn’t account for and didn’t allow. It looks like there is a little more to be done here to properly account for this raid change, but at least now we have the issues. There have also been some large changes to drive names and how they are handled, so that is another potential candidate. So it looks like your accidental single-drive purchase has led to a little more than was expected. All good though, but for the time being I would not do any more raid changes on this pool until the second of the above issues has been resolved. I have added a note to both issues to update this forum thread with any significant progress.

Hope that helps and thanks for reporting.

I see that you have already begun the diagnosis of this one over on GitHub. See you there.

@grebnek In a somewhat redundant message, at least in your case, as you were the one who provided the pull request that fixed your reported issue, I would just like to update this thread as per usual with a link to the Rockstor release announcement that first included your fix against one of the issues I opened as a result of your report in this forum:

As of the Rockstor 3.8-14.09 testing channel update, your fix is now included.

Thanks again for the story and the fix.

And in a similar vein, there are fixes pending in a pull request that resolves the related issue found, i.e. that of Web-UI updates for Web-UI initiated in-progress balance tasks. I have added a note to this issue to update this forum post accordingly.

@grebnek Just an update on the other issue created as a result of this thread:

As of the Rockstor 3.8-14.14 testing channel update, Web-UI initiated balance operations should at least be better reported: they are now aware of the paused, pausing, cancelled, and cancelling states, and can report the current progress ‘mid balance’. Not perfect yet, but step by step.
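For those following along at the command line, these states map onto the plain btrfs balance sub-commands; a quick sketch, reusing the pool path from earlier in the thread:

[root@rockstor01 ~]# btrfs balance status /mnt2/pool1/
[root@rockstor01 ~]# btrfs balance pause /mnt2/pool1/
[root@rockstor01 ~]# btrfs balance resume /mnt2/pool1/
[root@rockstor01 ~]# btrfs balance cancel /mnt2/pool1/

The pausing/cancelling states correspond to a pause or cancel request that has been issued but has not yet taken effect.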

Thanks for your help on these issues.

Old topic, but maybe I’ll chip in.

There is a known issue with btrfs balance and profile changes. Essentially, when you change a profile, balance takes the file system as it is at that point and starts to rebalance it with the new filters. Now there is a glitch caused by how data is committed to the FS: a single update to anything within the first ~30 seconds after balance starts can still be performed with the old profile. So if your FS updates an access time for a file, if you modify something like a syslog, or even metadata for that matter, this will get pushed to disk with the old profile. If you are very unlucky, this can initiate a whole new allocation block (usually 1GB) that will accept more data with the old profile.

So when changing a profile on your FS (here called a pool), always perform a “double tap”, as sketched below. :slight_smile:
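A minimal sketch of that double tap, assuming a single-to-RAID1 conversion on the pool path used earlier in this thread: run the convert balance once, then run it again with the soft filter, which only touches chunks still on the old profile:

[root@rockstor01 ~]# btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt2/pool1/
[root@rockstor01 ~]# btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt2/pool1/
[root@rockstor01 ~]# btrfs fi df /mnt2/pool1/

After the second pass, the stray “Data, single” line seen earlier in this thread should be gone.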