OS Drive Mounting as Read Only

All of my drives, including the OS drive, are mounting as read only. If the pool is degraded and mounted as RO, the OS drive usually stays RW, right?

Is the fact that the OS drive is mounted as RO an indication that the drive is likely to be failing? dmesg indicates csum errors on sdi4, which is my USB drive.

If the USB drive is failing, is there a way I can pull the Rockstor configuration off of it so that I can restore more quickly? I did pull a config from the server the last time I set it up, and saved it on my laptop… but I got a new laptop and forgot to pull a new configuration :frowning:

I pulled the flash drive and attempted to make an image of it, but reading from the drive failed. However, I did find a backup of the Rockstor config, so that should help when I get things running again.

Currently reinstalling the OS on a new flash drive.


@Noggin Hello again.


That would also be a ‘possibly’ and:

That would do it, though it could be memory, not just the drive. And for that matter it could be the PSU, drive controller, cables, etc. But something caused the csum errors and threw btrfs into RO.
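For anyone wanting to check this themselves, the kernel log is the place to look. A minimal sketch, run against a sample log line so it is safe to execute anywhere (the device and inode values are made up; on the live system you would pipe `dmesg` instead):

```shell
# Sample BTRFS checksum-failure line of the kind dmesg shows;
# device/inode values here are made up for illustration.
sample='BTRFS warning (device sdi4): csum failed root 5 ino 257 off 0'

# The same filter you would apply to the live kernel log:
#   dmesg | grep -i 'csum failed'
echo "$sample" | grep -ci 'csum failed'
# prints: 1
```

On the running box, `btrfs device stats <mountpoint>` also shows per-device corruption counters, which can help separate a failing drive from bad RAM.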

Nice. Yes, we save config backups to the system drive, and we have the location documented here:

All configuration backups are stored in zipped json format in the /opt/rockstor/static/config-backups directory

Looks like we are missing a full stop on that doc entry actually.
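To actually pull those backups off the old drive, mounting it read-only on any Linux machine and copying that directory out is enough. A sketch, demonstrated on a scratch directory tree so it is safe to run as-is; on the real drive the source would be the old root mounted read-only (the mount command in the comment, the partition name, and the backup filename below are all placeholders):

```shell
# Stand-in for the old OS drive mounted read-only somewhere, e.g.:
#   mount -o ro /dev/sdX4 /mnt/oldroot
old=$(mktemp -d)
mkdir -p "$old/opt/rockstor/static/config-backups"
touch "$old/opt/rockstor/static/config-backups/backup-demo.json.gz"  # placeholder file

# Copy everything in the documented backup location somewhere safe.
dest=$(mktemp -d)
cp -a "$old/opt/rockstor/static/config-backups/." "$dest/"
ls "$dest"   # prints: backup-demo.json.gz
```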

Keep us posted. And check your memory via our newly revamped doc entries here:
Thanks again to @Hooverdan for their ongoing doc improvements by the way.

Hope that helps.


I pulled the USB drive and attempted to make an image from it. Reading failed, so I’m pretty sure the drive is shot. Once I get my new OS drive up and running and all my stuff reconfigured, I might try to format the old drive. I think the program I was using to make the image was doing block-level reads, so I don’t think it’s just a file-system error, but I’ll try a format anyway.
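If the drive ever becomes partially readable again, a plain `dd` with `conv=noerror,sync` keeps going past bad sectors (padding them with zeros) instead of aborting like many imaging tools do. A sketch, demonstrated on a scratch file rather than a real device; `/dev/sdX` in the comment is a placeholder:

```shell
# Scratch 'drive' so the example is safe to run:
src=$(mktemp)
dd if=/dev/urandom of="$src" bs=1K count=64 status=none

# Block-level copy that continues past read errors instead of stopping.
# On the real stick (placeholder device name):
#   dd if=/dev/sdX of=usb.img bs=64K conv=noerror,sync status=progress
dd if="$src" of="$src.img" bs=4K conv=noerror,sync status=none

cmp -s "$src" "$src.img" && echo "image matches source"
```

GNU ddrescue, if available, is better still on genuinely failing media, since it retries bad regions and keeps a log of them.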

@Noggin Hello again.

Once you get to this stage, and can safely do anything with this drive, a better approach to testing its entire surface usability, and its capability under sustained effort, might be to follow another section of our recently updated (by @Hooverdan ) docs:
Pre-Install Best Practice (PBP): Pre-Install Best Practice (PBP) — Rockstor documentation
specifically the following sub-section:
ShredOS/nwipe purpose: Pre-Install Best Practice (PBP) — Rockstor documentation

It will write to every part of the disk DESTRUCTIVELY (over existing data) and can then do a verify pass thereafter: ergo you test the drive’s ability to write (zeros) across its entire working surface, both continuously (though it does intermittent verifies) and then, via the verify-zeros option afterwards, its ability to read zeros back from that entire working surface. And given it’s continuous, you also likely end up heating the drive fully.

But again: only when all data on the drive is disposable as it will all be irrevocably overwritten.
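For the command-line inclined, the core of what the zero-fill plus verify pass does can be sketched with standard tools. Shown here on a small scratch file so it is safe to run; on real hardware the target would be the whole device (e.g. `/dev/sdX`, a placeholder) and the write pass irrevocably destroys all data:

```shell
# Scratch file standing in for the drive under test:
img=$(mktemp)
size=$((2 * 1024 * 1024))

# Write pass: zeros across the entire 'surface'.
dd if=/dev/zero of="$img" bs=1M count=2 status=none

# Verify pass: read it all back and confirm every byte is zero.
cmp -n "$size" "$img" /dev/zero && echo "verify pass: all zeros read back"
```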

Way better than just a quick wipe/format, which touches barely any of the drive rather than the entirety of its working area. It also provokes the drive’s ability to swap out any bad bits, if it has spare sectors for this type of self-heal. Any such swaps will show up in SMART later, so you can always check thereafter whether this has happened.

Hope that helps.