Constant disk access and power saving

Hello,


I’ve recently installed Rockstor 3.5-9 for testing purposes. 

It’s destined to be a home NAS. It’s all set up and populated with one boot drive and one data drive. It’s good. I like it! 

I would like the system to be as low-power as possible; however, I have noticed that the boot drive never settles, even when there is no access to the server (I even disconnected it from the network). From the dashboard I can see constant writes to the boot drive occurring.

How do I find out what is accessing the drive? Is there anything I can disable to stop the constant access?

Many thanks! 

Hi,

Thanks for becoming a user and welcome to the Rockstor community! There’s a service called the data collector that collects various metrics (share/pool/cpu/memory usage etc.) and writes them to its database, which is on the root disk. For your use case, you may not want that information being collected all the time. On my home NAS I always keep it on, mostly because I try to stress test every feature.

You can turn off the metric collection as follows:

1. On the dashboard, there’s an on/off switch under “Metric Collection”, on the left in the side navbar. You can turn it off there.

2. You can also turn it off from the System -> Services screen. The service name is “Data Collector”.
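
If you prefer the command line, something like the following should work too. This is just a sketch, assuming the data collector is registered with Rockstor’s supervisord under the program name “data-collector” (run the status command first to confirm the exact name on your install):

/opt/rockstor/bin/supervisorctl status                 # list supervised programs and their states
/opt/rockstor/bin/supervisorctl stop data-collector    # assumed program name; stops metric collection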

This should help. Let me know how it goes.


Thanks for the answer. The metric collection is turned off (the Data Collector service is off too), but there is still constant disk access. The disk activity light on the drive blinks in a repeated pattern every 3 seconds or so.

Was there a noticeable difference between with/without data collector?

There are other db reads and writes going on, but not that much. You can turn off replication and the task scheduler if you are not using them. You could also turn off the service monitor entirely with the command /opt/rockstor/bin/supervisorctl stop service-monitor (turning it off is not supported from the web-ui, but you can turn it back on from the web-ui).
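
For completeness, supervisorctl can also confirm the state and bring the service back without the web-ui; a quick sketch using the same path as above:

/opt/rockstor/bin/supervisorctl status service-monitor   # check the current state
/opt/rockstor/bin/supervisorctl stop service-monitor     # stop it (not exposed in the web-ui)
/opt/rockstor/bin/supervisorctl start service-monitor    # start it again later if needed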

I am curious to find out how much you save, if anything, with various things turned off. Please share your observations if you can.

Thanks!

Thanks. I’ll keep an eye on this and get back to you. 

No, not a massive difference with the data collector off. 


I have installed iotop to monitor the disk I/O, using the flags -ao (cumulative totals, active processes only).
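
In case anyone wants to reproduce this, a minimal sketch of what I ran (assuming a CentOS-based Rockstor install where yum is available):

yum install iotop    # install the tool
iotop -ao            # -a accumulates I/O totals since start, -o shows only processes actually doing I/O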

I have turned off all of the services on the Services page as well as the data collector. The service monitor is still enabled. Over a 10-minute period, iotop gives:

Total DISK READ :       0.00 B/s | Total DISK WRITE :     361.13 K/s
Actual DISK READ:       0.00 B/s | Actual DISK WRITE:     683.00 K/s
  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
 2125 be/4 postgres      0.00 B    149.34 M  0.00 %  7.88 % postgres: rocky smartdb [local] COMMIT
  844 be/4 root          0.00 B     16.14 M  0.00 %  0.13 % [btrfs-transacti]
 2112 be/4 postgres      0.00 B   1276.00 K  0.00 %  0.09 % postgres: checkpointer process
 2116 be/4 postgres      0.00 B      9.47 M  0.00 %  0.00 % postgres: stats collector process
 7820 be/4 root          0.00 B    144.00 K  0.00 %  0.00 % [kworker/u4:0]
 5390 be/4 root          0.00 B   1008.00 K  0.00 %  0.00 % [kworker/u4:1]
29479 be/4 root          0.00 B      2.19 M  0.00 %  0.00 % [kworker/u4:3]
  843 be/4 root          0.00 B    112.00 K  0.00 %  0.00 % [btrfs-cleaner]

Essentially, over 10 minutes, almost 150M of data was written to the postgres smartdb! That’s a lot for a machine doing nothing :)
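
To see which tables are responsible for the growth, one could ask postgres directly; a sketch, assuming the database is named smartdb as shown in the iotop output above:

su - postgres -c "psql smartdb -c \"SELECT relname, pg_size_pretty(pg_total_relation_size(relid)) FROM pg_stat_user_tables ORDER BY pg_total_relation_size(relid) DESC LIMIT 5;\""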

If I disable the service monitor using ‘/opt/rockstor/bin/supervisorctl stop service-monitor’ as you suggest, and then run iotop for a further 10 minutes, I get:

Total DISK READ :       0.00 B/s | Total DISK WRITE :       0.00 B/s
Actual DISK READ:       0.00 B/s | Actual DISK WRITE:       0.00 B/s
  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
  844 be/4 root          0.00 B      9.23 M  0.00 %  0.11 % [btrfs-transacti]
 2112 be/4 postgres      0.00 B    540.00 K  0.00 %  0.04 % postgres: checkpointer process
 2116 be/4 postgres      0.00 B      6.72 M  0.00 %  0.00 % postgres: stats collector process
 7820 be/4 root          0.00 B    448.00 K  0.00 %  0.00 % [kworker/u4:0]
29479 be/4 root          0.00 B    384.00 K  0.00 %  0.00 % [kworker/u4:3]

So, the main culprit has disappeared! Much more reasonable. 

I’m curious, though, as to why there should be 9M of btrfs transactions on an idle machine. That might still stop the drive from going into suspend mode.

This is great information for us as we optimize Rockstor. We can perhaps add options for the service monitor and data collector so the user can set the amount and frequency of data collection. Here’s the issue to track this:

https://github.com/rockstor/rockstor-core/issues/585
Feel free to participate on GitHub.

Regarding the btrfs-transaction threads, I am not sure exactly what’s causing them, but perhaps the root btrfs filesystem is fragmented? I checked my systems and don’t notice this. But to check whether it’s triggered by postgres, you could turn off rockstor (systemctl stop rockstor), see if the activity disappears after some (possibly long) time, and see if it resumes once rockstor is started again (systemctl start rockstor). If I can reproduce this on one of my systems, I’ll throw out more ideas.
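
Putting that test together, a minimal sketch of the sequence described above:

systemctl stop rockstor    # stop the rockstor services
iotop -ao                  # watch whether the [btrfs-transacti] writes eventually settle (may take a while)
systemctl start rockstor   # bring rockstor back and see if the writes resume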

Here’s what I am seeing as well with rockstor running as normal. The data captured was just for a couple of minutes.

  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
 2898 be/4 postgres      0.00 B     57.80 M  0.00 %  0.33 % postgres: rocky smartdb [local] idle
 1125 be/4 root          0.00 B      4.58 M  0.00 %  0.03 % [btrfs-transacti]
28769 be/4 root          0.00 B      0.00 B  0.00 %  0.01 % [kworker/0:0]
 2730 be/4 postgres      0.00 B    420.00 K  0.00 %  0.00 % postgres: checkpointer process
  661 be/4 root          0.00 B    512.00 K  0.00 %  0.00 % [kworker/u4:3]
 2735 be/4 postgres      0.00 B      2.04 M  0.00 %  0.00 % postgres: stats collector process
29911 be/4 root          0.00 B     48.00 K  0.00 %  0.00 % smbd
12539 be/4 root          0.00 B    192.00 K  0.00 %  0.00 % [kworker/u4:2]
 4427 be/4 root          0.00 B    640.00 K  0.00 %  0.00 % [kworker/u4:0]

And with rockstor stopped:

  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
 1125 be/4 root          0.00 B      4.70 M  0.00 %  0.02 % [btrfs-transacti]
28769 be/4 root          0.00 B      0.00 B  0.00 %  0.00 % [kworker/0:0]
 2730 be/4 postgres      0.00 B    540.00 K  0.00 %  0.00 % postgres: checkpointer process
 2070 be/4 root          0.00 B     16.00 K  0.00 %  0.00 % rsyslogd -n [rs:main Q:Reg]
  661 be/4 root          0.00 B    144.00 K  0.00 %  0.00 % [kworker/u4:3]
 2735 be/4 postgres      0.00 B      3.34 M  0.00 %  0.00 % postgres: stats collector process
 2757 be/4 root          0.00 B     12.00 K  0.00 %  0.00 % smbd
12539 be/4 root          0.00 B    112.00 K  0.00 %  0.00 % [kworker/u4:2]
20745 be/4 root          0.00 B    384.00 K  0.00 %  0.00 % [kworker/u4:4]
18792 be/4 root          0.00 B    144.00 K  0.00 %  0.00 % [kworker/u4:1]

Thanks for reporting the numbers. Yes, smartdb, which is the database for the smart manager that collects metrics, is write-intensive. We’ll optimize it, but in the meantime, if you turn off the data collector it will save many writes.

/opt/rockstor/bin/supervisorctl stop service-monitor 

solved my I/O issues as well.