A Balance process is already running or paused for this pool (ChronosIgnis). Resize is not supported during a balance process

Brief description of the problem

I’d like to change the RAID level of a 2-disk RAID1 pool, but Rockstor thinks there’s a balance running when there isn’t one.

Rockstor version 4.6.1-0 on openSUSE Leap 15.5. I probably should also mention I only just updated Leap from 15.4.

Detailed step by step instructions to reproduce the problem

Attempt to change RAID level from raid1 to single on a RAID1 pool.

Web-UI screenshot

Error Traceback provided on the Web-UI

Traceback (most recent call last):
  File "/opt/rockstor/.venv/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 68, in run_for_one
    self.accept(listener)
  File "/opt/rockstor/.venv/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 27, in accept
    client, addr = listener.accept()
  File "/usr/lib64/python2.7/socket.py", line 206, in accept
    sock, addr = self._sock.accept()
error: [Errno 11] Resource temporarily unavailable

There are a couple of failed balances but none currently in progress:

@unguul Hello there.
Re:

After such a dramatic change as an OS distribution update, it is always best to do a reboot. I’m assuming you have done this however. And that you followed our docs on distribution updating a Rockstor instance, i.e.:

Distribution update from 15.4 to 15.5: Distribution update from 15.4 to 15.5 — Rockstor documentation

And note that we now also have:

Distribution update from 15.5 to 15.6: Distribution update from 15.5 to 15.6 — Rockstor documentation

Not yet strictly required, but 15.6 is our main target for the next Stable release. It currently has no stable rpm available; however, we are in the late Stable Release Candidate phase in the testing channel, as of 5.0.14-0 from last week:

We have seen false flags like this before. Try a reboot first, just in case. If that does not work, try updating to 5.0.14-0 via the testing channel, then change back to Stable to avoid accidentally installing anything newer from testing, as we will soon begin a new testing-channel development phase where things get flaky again for a bit. That way we can see whether the pending next Stable release exhibits this same issue on your system, and then attend to a fix if one is still required.

The output of the following command, run as the root user, may also help us diagnose what has happened here:

btrfs balance status /mnt2/rock-pool

Substituting rock-pool for your Pool name.
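For reference, a short sketch of how that check might look in practice. The pool name and mount path here are examples (Rockstor mounts pools under /mnt2/); the cancel step is only relevant if the status output actually reports a balance as running or paused, which upstream btrfs-progs supports via the balance subcommands:

```shell
# Check whether btrfs itself believes a balance is running on this pool.
# Substitute rock-pool with your own Pool name.
btrfs balance status /mnt2/rock-pool

# If no balance is active, btrfs typically reports something like:
#   No balance found on '/mnt2/rock-pool'

# Only if the status output shows a balance genuinely stuck in a
# running or paused state, it can be cancelled:
# btrfs balance cancel /mnt2/rock-pool
```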

Hope that helps. And keep in mind that almost all functional components have been switched out since our last Stable (4.6.1-0) to the latest RC9: this is a big update and will take a few minutes longer than normal.
