Can anyone help me figure out what I am doing wrong? I am a first-time NAS user and trying to learn Linux, so please be kind.
My system:
16 GB Kingston ECC RAM
Xeon 2246G CPU
5 × 12 TB WD Reds (raid 6)
1 × Samsung SSD (root)
Rockstor 4.1.0-0
Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/rest_framework_custom/generic_view.py", line 41, in _handle_exception
    yield
  File "/opt/rockstor/src/rockstor/storageadmin/views/share.py", line 206, in post
    add_share(pool, sname, pqid)
  File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 645, in add_share
    toggle_path_rw(root_pool_mnt, rw=True)
  File "/opt/rockstor/src/rockstor/system/osi.py", line 654, in toggle_path_rw
    return run_command([CHATTR, attr, path])
  File "/opt/rockstor/src/rockstor/system/osi.py", line 224, in run_command
    raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /usr/bin/chattr -i /mnt2/audyn. rc = 1. stdout = ['']. stderr = ['/usr/bin/chattr: Read-only file system while setting flags on /mnt2/audyn', '']
@goka Welcome to the Rockstor community forum.
From the following part of the error message:

/usr/bin/chattr: Read-only file system while setting flags on /mnt2/audyn

it looks like the pool (collection of drives) has gone read-only. So it can't make any changes to that pool, as that would require a healthy pool that allows write access.
Btrfs (our chosen filesystem) will go read-only if it detects a fault with the filesystem. This is to prevent compounding whatever issue it has found.
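If you want to confirm the pool really is mounted read-only, you can look at its mount options. A minimal sketch (the mount point /mnt2/audyn is taken from your traceback; on a live system you would feed the helper the output of findmnt):

```shell
#!/bin/sh
# Report whether a mount-options string indicates a read-only mount.
is_ro() {
    case ",$1," in
        *,ro,*) echo "read-only" ;;
        *)      echo "read-write" ;;
    esac
}

# On a live system, check the pool's actual mount options, e.g.:
#   is_ro "$(findmnt -no OPTIONS /mnt2/audyn)"
# Sample options string like the one a faulted btrfs pool reports:
is_ro "ro,relatime,space_cache,subvolid=5,subvol=/"
```

The kernel log (`dmesg`) usually also records why btrfs flipped the filesystem to read-only.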
To help those here on the forum help you, could you give us some more history and details of this system's setup? I.e. are the 5 × 12 TB drives raided using hardware raid (not recommended) or btrfs's own device management/raid, i.e. a btrfs raid 6?
What I suspect has happened here is that our 4.1.0-0 "Built on openSUSE" variant defaults to read-only for the btrfs parity raid levels of 5 and 6. This is a decision made by our upstream base OS, openSUSE Leap 15.3, in this version. The btrfs parity raids are a lot younger than the other raid levels and are as yet less trusted; hence our upstream defaulting to read-only.
Our approach to this, for those who wish to use these btrfs raid levels, is to install a newer btrfs software 'stack' via a newer kernel and filesystem tools than are standard for our default base OS, at least at the time of writing.
The following HowTo is to address this update for those who want, or need, to take this risk:
https://rockstor.com/docs/howtos/stable_kernel_backport.html
See first our Redundancy Profiles doc section here:
https://rockstor.com/docs/interface/storage/pools-btrfs.html#redundancy-profiles
Which also indicates this known upstream default.
The HowTo gives one reason for folks needing a parity raid level as having a high disk count. I'd say you are on the edge of this with 5 disks. A raid 10 may serve you better for now; however, it only tolerates a single disk failure.
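For reference, under the hood a btrfs raid-level change is a balance with convert filters. A sketch of the command involved (Rockstor's web UI normally issues this for you; the mount point /mnt2/audyn is assumed from the traceback, and the pool must be mounted read-write for it to succeed):

```shell
#!/bin/sh
# Build the btrfs balance-convert invocation for a target raid level.
# Sketch only: run the printed command as root on a writable pool.
convert_cmd() {
    target=$1
    mnt=$2
    printf 'btrfs balance start -dconvert=%s -mconvert=%s %s\n' \
        "$target" "$target" "$mnt"
}

# Mount point taken from the traceback above; adjust for your pool.
convert_cmd raid10 /mnt2/audyn
```

The `-dconvert`/`-mconvert` filters convert the data and metadata block groups respectively; a balance over 5 × 12 TB of drives can take a long time.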
Let us know if this fully explains your situation, i.e. no real history here other than creating the pool and finding it's read-only. This was a surprise for us initially, as it was brought in as a new default for 15.3, and will likely go away when we are, in time, built on 15.4 or later.
Hope that helps.
The history is I built this in 2020 using Unraid, but had issues with my USB getting corrupted. That, and the fact that the Docker and apps side was a bit more in-depth, meant the learning curve was a bit too much for me. I kept a physical backup of my data, so I just wiped all my drives; I only had maybe 3 TB of data on it to start with.
I read another post saying raid 6 is read-only, so I changed the pool to raid 10 earlier and got this (I did also restart the NAS after I changed the pool to raid 10):
Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/rest_framework_custom/generic_view.py", line 41, in _handle_exception
    yield
  File "/opt/rockstor/src/rockstor/storageadmin/views/share.py", line 206, in post
    add_share(pool, sname, pqid)
  File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 645, in add_share
    toggle_path_rw(root_pool_mnt, rw=True)
  File "/opt/rockstor/src/rockstor/system/osi.py", line 654, in toggle_path_rw
    return run_command([CHATTR, attr, path])
  File "/opt/rockstor/src/rockstor/system/osi.py", line 224, in run_command
    raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /usr/bin/chattr -i /mnt2/audyn. rc = 1. stdout = ['']. stderr = ['/usr/bin/chattr: Read-only file system while setting flags on /mnt2/audyn', '']
Honestly, I don't care about using btrfs. I'd much rather have ZFS or raid 6. If there is a way around using btrfs, I would love to know.
I ran into the same read only issue with btrfs and raid5/6
After some poking around I stumbled on a forum topic, "Unable to import BTRFS pool into Rockstor 4.0.8-0", which was a similar but slightly different issue.
To distill it down, here's what I did (as root) which solved the problem:
echo "options btrfs allow_unsupported=1" > /etc/modprobe.d/01-btrfs.conf
mkinitrd
reboot
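After the reboot you can check whether the setting took effect. A small sketch, assuming the sysfs path the SUSE kernel uses for this parameter on Leap 15.3 (it should read 1 when unsupported raid levels such as 5/6 are allowed; on non-SUSE kernels the file won't exist at all):

```shell
#!/bin/sh
# Print a kernel module parameter's value, or "absent" if the
# parameter file does not exist (e.g. on a non-SUSE kernel).
read_param() {
    if [ -r "$1" ]; then
        cat "$1"
    else
        echo "absent"
    fi
}

# Path assumed from openSUSE Leap 15.3's patched btrfs module.
read_param /sys/module/btrfs/parameters/allow_unsupported
```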
Good luck
Pete