@tonyhr0x Hello again.
Remember that in my earlier instructions I pasted both the command and its expected output; you are blindly cutting and pasting the entire thing, hence the result. We are currently looking for what is going wrong with your system when previously it was not. There do appear to be clues in the journalctl output, but let's also clear up the root mount issue. You also need to do your own research on what is going on here. We cannot promise any level of technical support (at least yet). We are a DIY NAS project; that means there is an assumption of some technical expertise, and when it breaks you either have your own expertise on hand, or have some available. Pasting the extremely well-known output of a command back in, as if it were the command itself, is an indication that I have misread your level of expertise here. My apologies.
What is the output of:
mount | grep " / "
Note that the leading text of the previous command was my way of indicating the base OS version (Leap 15.3), with "~" designating the user's home directory and "#" indicating a root user. A prompt like that tells folks more than the command alone, but in this case it was potentially misleading.
The output should be akin to what I pasted before.
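For reference, on a btrfs-rooted install the command and the general shape of its output look something like the following (device names, mount options, and subvolume paths will differ per system):

```shell
# Show how, and from where, "/" is mounted.
mount | grep " / "
# Typical shape on a btrfs root (details vary per install):
#   /dev/sda3 on / type btrfs (rw,relatime,...,subvol=/@/...)
# The flag to check is "rw" vs "ro" in the bracketed options.
```

The spaces around "/" in the grep pattern matter: they match the root mount line only, rather than every mount point containing a slash.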
As the main developer and maintainer of The Rockstor Project I do have to take care with the time I spend here helping folks, but it can be rewarding and helps to keep me in touch with the difficulties folks are facing with what the development team is producing. What we are trying to do in this thread is help to find where your issues lie, and this mount command is to establish the state of the root mount. It is also directed at all others reading, in an attempt to help with diagnostic method. Your Web-UI state seems to be out-of-sync with the disk state, hence checking if your "ROOT" labelled pool, the system pool, has gone read-only. We have already seen that there are no errors recorded by btrfs, but our concern here is to try and track down why there is this out-of-sync situation. Getting your data back is strictly in the realm of your backup provisions. But if your system drive is duff, then it will do little good to simply do a fresh install onto the same duff system drive. Hence the attempted check.
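As a quick sketch of that read-only check: findmnt (part of util-linux, so already on a Leap base) reports the mount options of "/" directly, which avoids any grep-pattern subtleties:

```shell
# Print only the mount options of the root filesystem.
findmnt -no OPTIONS /
# A scripted check: split the comma-separated options and look for an
# exact "ro" entry (a pool that has dropped to read-only will show it).
if findmnt -no OPTIONS / | tr ',' '\n' | grep -qx ro; then
    echo "root filesystem is read-only"
else
    echo "root filesystem is read-write"
fi
```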
You are not wasting my time; that would be my area of responsibility. And I'd like to help where I can, but I do have to focus on the Rockstor elements here. Your log entry indicates some serious compatibility issues between the kernel (ACPI subsystem) and your hardware. You need to look these up. Also remember you are running a now EOL OS, hence the work here to identify the root cause. If your system drive is duff, you can re-install a fresh Rockstor instance (all data drives disconnected, just in case), then power off, reconnect all data drives, and import the pools into the new system. But if the system drive has issues then that would be a dead-end (or a pending one).
You may also want to look up how to get SMART info from the command line, if the Rockstor Web-UI is failing you, as it may be trying to run on broken legs.
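A minimal sketch of that, assuming smartmontools is installed (on a Leap base: `zypper in smartmontools`); `/dev/sda` here is only an assumed device name, substitute your actual system drive:

```shell
# /dev/sda is an assumed device name - substitute your actual system drive.
# Quick overall health verdict (PASSED / FAILED):
smartctl -H /dev/sda
# Full report: drive identity, SMART attributes, and the drive's own error log:
smartctl -a /dev/sda
# Attributes worth watching: Reallocated_Sector_Ct, Current_Pending_Sector,
# and any non-empty error log entries.
```

Both commands need root, which fits the "#" prompt mentioned earlier.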
Also look up those red errors in your logs. These are all Linux-central issues and nothing Rockstor related, other than that we run on a base Linux OS. But you are seeing BIOS compatibility errors in ACPI. That's not normal, and likely not healthy. Ergo: use a newer kernel, and address your base OS EOL scenario as well. But you do need to work at interpreting the commands offered here. We teach here when we can, but it's always difficult to assess folks' knowledge. My apologies again for pitching things wrongly. Incidentally, via such interactions as this, we are working on assembling a collection of commands that folks can run, hopefully in an automated sense, to help with assessing/diagnosing what is going wrong with someone's system. It can also help to have chronological info on what led to a failure.
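To pull those red entries out for research, the journal can be filtered by priority and by boot; a minimal sketch:

```shell
# Error-and-worse messages from the current boot only:
journalctl -b -p err --no-pager
# Narrow further to the ACPI complaints discussed above:
journalctl -b --no-pager | grep -i acpi
```

Searching the web for the exact message text from those lines is usually the fastest route to known kernel/BIOS issues.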
Thus far you have not done all that has been suggested here. Doing so can help, if in no other way than by not demotivating the folks trying to help.
I would ask if others here on the forum could review what has been examined thus far, as I'm a little pressed time-wise with getting our next stable release ready and organising our new fiscal setup, so that we can in time stand up commercial support options for folks in a rush/corner, as it were.
Do go over what may have been missed to date in this exchange to see if there is still something requested but not yet tried/supplied. Diagnosis is tricky at the best of times.
From what we have found out to date:
This suggests a failed system disk (the "I/O" in the disk I/O error stands for Input/Output: the drive can't be read/written). That is good in a way, as it's not your data disks (assuming you haven't used the ROOT pool for any non-replaceable data!). Which brings us back to the mount command request. All drives fail at some point. And again, look to retrieving SMART data via the command line for that drive. Running on broken legs (ro, or a broken ROOT pool) means the command line may be your only recourse. Plus there is always the re-install of a fresh non-EOL Rockstor instance on a fresh known-good system drive, if you do need this assistance. Just take extreme care not to overwrite any data pool member if you have no backups.
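On the "extreme care" point: before any re-install it is worth positively identifying every drive, so no data pool member gets picked as the install target. A minimal check using only util-linux tools:

```shell
# List all block devices with size, filesystem type, and label, so the
# intended (new, known-good) system drive can be told apart from data
# pool members before pointing any installer at it.
lsblk -o NAME,SIZE,TYPE,FSTYPE,LABEL,MOUNTPOINT
# Data pool members will show FSTYPE "btrfs" with your pool's LABEL;
# double-check the target device NAME against this listing first.
```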
Hope that helps.