Formatting new drives

So this might sound like a dumb question, but when I installed the new drives and got the pools set up, it didn’t give any evidence that it formatted the drives to BTRFS. If it did format them, it sure happened quickly. I would think that three 3 TB drives in a RAID 5 would take a little longer than instantly to format and set up. On a side note, when I’m using Samba, Windows reports that the network drive is NTFS. Does that mean the NAS drives are NTFS, or is Samba just reporting that so Windows can access it?

Welcome to BTRFS, where a lot of things happen in constant time, almost instantly in many cases.

I don’t have an answer to your NTFS question, but I’ll check it out soon and let you know what I see. Some others (@felixbrucker, maybe?) on the forum may know the answer.

btrfs indeed speeds up the setup of RAID profiles, as it is aware of both the underlying devices and the data above them. When creating a RAID with btrfs, there is no need to calculate parity or copy bit for bit to the other drive(s), because the filesystem itself knows there is no data that needs to be “preserved”. When setting up an mdadm RAID, exactly that happens: the whole disks are read and parity/mirror data is written, even if it isn’t needed at all.
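To make the contrast concrete, here is a minimal sketch. The device names are hypothetical placeholders, and these commands are destructive, so don’t copy them verbatim:

```shell
# Hypothetical devices -- substitute your own. Creating a btrfs RAID5
# filesystem returns almost immediately: no parity initialization pass
# is done on empty devices.
mkfs.btrfs -f -d raid5 -m raid5 /dev/sdb /dev/sdc /dev/sdd

# An mdadm RAID5, by contrast, starts a full initial sync right away:
#   mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
#   cat /proc/mdstat   # shows a resync progress indicator that can run for hours
```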

On your second question, regarding NTFS showing up as the filesystem for the Samba share: yes, indeed, that is just Samba’s reporting. The files are sitting on the btrfs pool; Samba shares them and emulates an “NTFS” filesystem/behaviour on top of that.
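If you want to confirm on the server side that the share really lives on btrfs, one quick check (using the pool path from later in this thread; adjust for your own mount point):

```shell
# Ask the kernel for the filesystem type of the mount point.
# Prints "btrfs" for a btrfs pool, regardless of what Windows
# shows for the Samba share on top of it.
stat -f -c %T /mnt2/p1_r1
```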

I see. So would that cause any issues if I were to go from, say, a RAID 1 to a RAID 5? I set up virtual drives and it let me make the move; I just don’t know how it would handle the data if I were to try it.

btrfs can change the RAID level on the fly with a balance; it preserves your data and rearranges everything to fit the new scheme. Your Samba “Windows” permissions are preserved too, as they are also just bits and bytes in a table file Samba uses to emulate the NTFS behaviour. I have converted raid10 to raid6 and back, and it worked flawlessly, but I’m unsure what happens when a drive fails during such a conversion (these conversions can take a long time).
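For reference, the on-the-fly conversion is a balance with convert filters. A minimal sketch, with a placeholder mount point:

```shell
# Convert both data and metadata chunks to the raid5 profile in place.
# /mnt/pool is a placeholder -- use your actual mount point.
btrfs balance start -dconvert=raid5 -mconvert=raid5 /mnt/pool

# The balance runs in the background; check on it at any time with:
btrfs balance status /mnt/pool
```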

Only one thing I wanted to mention here, because I am in the middle of a migration: the balancing (in my case, for sure) takes ages. I am doing the RAID 5 balancing for 3 drives and 3 TB of data, and the system has been balancing for 15 hours now. Downside: the whole system is pretty slow in the meantime.
Also, do not forget to have enough space left on your pool to do the balancing; otherwise you run into an error which is neither clearly reported nor checked on the Web-UI side.
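A rough way to check for that before starting, again with a placeholder mount point:

```shell
# A conversion needs unallocated space on every device to write new
# chunks into; check the "Device unallocated" line and per-device
# figures before starting a balance. /mnt/pool is a placeholder.
btrfs filesystem usage /mnt/pool

# If a balance aborts with ENOSPC, the kernel log is where the
# error actually shows up:
dmesg | grep -i btrfs | tail
```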

@felixbrucker and @tobb555
During my balance process from single btrfs to raid5, I see something like this on the CLI:

btrfs filesystem usage /mnt2/p1_r1/
Device size: 5.91TiB
Device allocated: 1.60TiB
Device unallocated: 4.31TiB
Device missing: 0.00B
Used: 1.54TiB
Free (estimated): 7.56TiB (min: 4.42TiB)
Data ratio: 0.58
Metadata ratio: 0.50
Global reserve: 512.00MiB (used: 11.41MiB)

Data,single: Size:1.60TiB, Used:1.54TiB
/dev/sdd 1.60TiB

Data,RAID5: Size:1.17TiB, Used:1.16TiB
/dev/sda 464.76GiB
/dev/sdc 729.76GiB
/dev/sdd 729.76GiB

Metadata,single: Size:3.00GiB, Used:1.80GiB
/dev/sdd 3.00GiB

Metadata,RAID5: Size:3.00GiB, Used:1.55GiB
/dev/sda 1.00GiB
/dev/sdc 2.00GiB
/dev/sdd 2.00GiB

System,single: Size:4.00MiB, Used:336.00KiB
/dev/sdd 4.00MiB

Unallocated:
/dev/sda 1.02MiB
/dev/sdc 2.01TiB
/dev/sdd 421.75GiB

btrfs filesystem df /mnt2/p1_r1
Data, single: total=1.60TiB, used=1.54TiB
Data, RAID5: total=1.17TiB, used=1.16TiB
System, single: total=4.00MiB, used=336.00KiB
Metadata, single: total=3.00GiB, used=1.80GiB
Metadata, RAID5: total=3.00GiB, used=1.55GiB
GlobalReserve, single: total=512.00MiB, used=1.11MiB

Without knowing too much about btrfs, the output indicates to me that you would be able to restore the already-balanced raid5 data, but not the “single” data, if you lose a disk during the conversion.
Notably, over time the “single” data shrinks and the raid5 data grows as my balancing to raid5 progresses.
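A simple way to watch that shrink happen, assuming the same mount point as in the output above:

```shell
# Re-run the per-profile breakdown every minute; the "single" totals
# should fall and the RAID5 totals rise as the balance progresses.
watch -n 60 'btrfs filesystem df /mnt2/p1_r1'
```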
