@kcomer Hello again. I have found that if you leave a browser open on the Disk page it can end up preventing the drives from powering down. Funnily enough, though, visiting that page once they have powered down doesn't then wake them: i.e. the polling seems to keep them awake but doesn't wake them once they have entered standby. Also, a few drive models are unresponsive to settings of more than 20 minutes. Take a look in the logs via @Flyer's fancy new System - Log Manager to make sure there are no messages akin to:
Skipping hdparm settings: device ata-KINGSTON_SMS200S330G_50026B724C085CD1-part3 not confirmed as rotational
In the “Rockstor Logs” directly after attempting to set an idle spindown time.
The system will only apply settings (as indicated by the configured setting appearing next to the hourglass) if the drive can be confirmed as rotational.
Under the hood we simply use an hdparm -S120 command (for 10 minutes in this example), so nothing special really. Oh, and some drives will refuse to spin down unless their APM setting is also 127 or below (set by Rockstor via the hdparm -B switch), though I haven't found many drives that this affects. That is in part why we blended these two settings together. Note, though, that the drives do have to be idle for the set period prior to spin-down.
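For reference, the -S value encoding is a little odd. Here is a minimal Python sketch of the mapping as described in hdparm(8); the helper name is mine for illustration, not Rockstor's actual code:

```python
def spindown_value(minutes):
    """Map an idle time in minutes to an hdparm -S value.

    Per hdparm(8): values 1-240 are multiples of 5 seconds
    (up to 20 minutes); values 241-251 are (value - 240) * 30
    minutes (up to 5.5 hours). Times between those ranges that
    don't divide evenly simply round down in this sketch.
    """
    seconds = minutes * 60
    if seconds <= 240 * 5:
        return seconds // 5              # 5-second units
    return 240 + min(minutes // 30, 11)  # 30-minute units, capped

# 10 minutes -> 120 (hdparm -S120); 5 minutes -> 60 (hdparm -S60)
```

So the -S120 above is 120 x 5 seconds = 10 minutes of idle time before spin-down.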
Hope that helps and let us know how you get on, and thanks for offering to help on this one as well.
Just remembered that we did have a report where an AAM setting on a drive was causing a false reading of the rotational status of the device and hence blocking the setting (and causing the '… not confirmed as rotational' message). Please see:
This applies if you are receiving this 'Skipping hdparm settings' message in your Rockstor logs when trying to add a setting.
[25/Jun/2016 18:05:50] INFO [system.osi:753] Skipping hdparm settings: device not confirmed as rotational
Not sure which drive it would be referring to, since I do have this running off an SSD, but all my other attached drives are rotational.
So my SSD, which is my boot drive, would not be rotational, and as I see here I cannot see the AAM setting on this drive:
udevadm info --query=property --name sda
@kcomer Hello again. Looks like for your setup the AAM is a 'red herring'; I went for this as a first guess since we have had this feature out from early in the last testing channel updates and that's the only problem yet identified / reported.
Also I didn't realise that your settings were inactive, so thanks for the screenshot; I should have asked. This means that when the disks were scanned prior to displaying this page, either an error was encountered or 'unknown' was returned from the following command:
hdparm -C -q /dev/sdx
Either way the display will then disable all settings involving hdparm (the spindown setting and the APM setting) by greying out / disabling them (in storageadmin/static/storageadmin/js/views/disks.js). This looks like what has happened in your case.
If you could paste the result of the above command, run against a device showing disabled spin down settings, we can see if it's a parsing error or not.
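For illustration, the parsing involved is roughly along these lines; this is a sketch of the idea, not the actual storageadmin code:

```python
import re

def parse_power_state(hdparm_output):
    """Extract the drive state from `hdparm -C -q /dev/sdX` output.

    Returns strings such as 'active/idle', 'standby' or 'unknown',
    or None if no state line was found at all.
    """
    match = re.search(r"drive state is:\s*(\S.*)", hdparm_output)
    return match.group(1).strip() if match else None

# The UI can then grey out the hdparm-based settings when the
# state could not be established:
state = parse_power_state("drive state is:  unknown\n")
settings_disabled = state is None or state == "unknown"
```

Either an unparseable output (None) or an explicit 'unknown' return would disable the spindown and APM controls.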
Another issue is that from your udevadm output it looks very much like your drives will not be identified as rotational anyway, but we will cross that bridge when we come to it, as I suspect the is_rotational() function used to assess this is going to have to be reworked a tad in the future.
For the time being if you could return the results of the above command we can take it from there. I suspect this is another special case requirement much as we had with the SMART settings I see you sporting there. Although it may be that the hdparm commands we need are just not going to function as expected through the LSI controller at all.
Incidentally, the rotational message you received in the new fancy log reader is most likely from attempting to set a spin down time on the SSD, with is_rotational() doing its job of flagging a non-rotational device correctly, probably from the following line in that device's udevadm output:
You will hopefully be pleased to note that in a pending review code change that message now identifies which drive it pertains to (as shown in my last post); sorry, I forgot to put that in on the first go round.
Thanks for helping to better test / improve these facilities by the way and taking the time to read the rather long referenced thread.
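For other readers, a rotational check along these lines can be sketched from the udevadm property output. This is illustrative only (not the actual is_rotational() implementation), and assumes the drive reports its rotation rate via udev:

```python
def looks_rotational(udev_properties):
    """Rough rotational check on `udevadm info --query=property`
    output (KEY=VALUE lines, one per property).

    A non-zero ID_ATA_ROTATION_RATE_RPM is a strong hint of a
    spinning disk; SSDs typically omit it or report 0.
    """
    props = dict(
        line.split("=", 1)
        for line in udev_properties.splitlines()
        if "=" in line
    )
    rpm = props.get("ID_ATA_ROTATION_RATE_RPM", "0")
    return rpm.isdigit() and int(rpm) > 0
```

As discussed above, drives behind some controllers may not expose this property at all, which is why such a check can fail on otherwise rotational drives.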
Here is the output from several drives, all the same:
[root@rockstor ~]# hdparm -C -q /dev/sdb
drive state is: unknown
[root@rockstor ~]# hdparm -C -q /dev/sdc
drive state is: unknown
[root@rockstor ~]# hdparm -C -q /dev/sdd
drive state is: unknown
[root@rockstor ~]# hdparm -C -q /dev/sde
drive state is: unknown
[root@rockstor ~]# hdparm -C -q /dev/sdf
drive state is: unknown
Starting to think these drives do not support spin down
@kcomer Thanks, so it looks like the parsing of the "hdparm -C -q" output is correct. Re your suggestion of employing additional tools, I don't think we are at that stage just yet.
So whatever the case, we aren't going to be able to read the power state of these drives, but we might still be able to set their built-in spin-down times; we just need to find a way that works through the RAID controller, if there is one. On a little searching it does appear to be a common problem with some LSI cards that the hdparm commands are not 'passed through' as expected. To see if this is the case, could you try the following command and see what it returns:
hdparm -S60 /dev/sdd
Assuming sdd is one of the drives on the RAID controller, of course. This should set a 5 minute spindown. Of course the UI is not able to read the state (because of the unknown return), so confirming it is a bit tricky, but I'll leave that one to your ears.
I suspect you are going to receive something along the lines of: “Function not implemented”.
But let's see first.
If that is the case then there may still be a way to set these drives' spin-down times with (as yet unsupported) additional custom smart options I've been reading up on. But we can move on to testing these once we have confirmed the "Function not implemented" or similar return from the above hdparm command.
Thanks again for your continued efforts in trying to weed out these issues; it looks very much like we are in the work-around phase now though.
@kcomer Yes, so it looks like with your card we can't use hdparm for this, but I have been looking into this myself since you raised the issue, and I think the most promising 'work around' would be to use our old friend smartctl, since it's already aware of the 're-direction' these cards seem to require.
There is a '--set' option that, along with the hard-won LSI specific custom smart options, may allow us to set the drives' built-in spin-down, akin to what hdparm normally manages to do. It has the advantage of addressing the same built-in drive facility and shares the same settings values (which are quite strange as it goes).
I have opened an issue (see below) to extend Rockstor's custom smart options validation to include the '--set' smartctl option; however, its facility to set spin down is as yet unproven in the current context of how these settings are used. That is, Rockstor's custom settings are applied to every smart call made by the system, specifically the poll-driven smart available / enabled call used to populate the last column on the 'Disk' page. So it may need more extensive integration as an hdparm -S work around, which in the current release is executed only on initial config or change and thereafter only on boot via a dedicated systemd script; so not repeatedly, as the custom smart options are.
So it would definitely help, in evaluating the viability of this work around and whether we need to incorporate some kind of 'don't run --set on every poll' exclusion (or wait until we are fully event driven), if you could identify how it behaves. Initially we need to know if, for example, the following command manages to execute without error and also does what we hope, i.e. sets spindown successfully.
smartctl -d 3ware,1 --set standby,60 /dev/twl0
Which, as you are now keenly aware (but for other readers), addresses the device on port 1 of the first LSI controller, named /dev/twl0 (down to driver / card type).
Obviously I'm hoping this works for you, although you may have to do a few drives in order to be able to hear if it works, given the hdparm power status read issue with these controllers (i.e. the unknown return). Also of note, from man smartctl we have "… there is no get option because ATA standards do not specify a method to read the standby timer", so we can't just run a command to see if the setting took. Rockstor's existing spin down mechanism simply retrieves the setting stored in the file we create to set it in the first place; it would have been nicer to retrieve it from the device itself, but as documented this isn't an option.
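For anyone scripting this across several drives, the standby value passed to --set uses the same encoding as hdparm -S (values 1-240 are multiples of 5 seconds). A command builder might look like the following; the helper name is illustrative, and the 3ware device path / port numbering are per the example above:

```python
def smartctl_standby_cmd(controller_dev, port, minutes):
    """Build a smartctl argument list to set the standby (spindown)
    timer on a drive behind a 3ware/LSI-style controller.

    The standby value shares the hdparm -S encoding, so e.g.
    5 minutes = 300 seconds = a value of 60.
    """
    value = (minutes * 60) // 5  # 5-second units
    return [
        "smartctl",
        "-d", "3ware,%d" % port,
        "--set", "standby,%d" % value,
        controller_dev,
    ]

# 5 minutes on port 1 of the first controller reproduces the
# command above: smartctl -d 3ware,1 --set standby,60 /dev/twl0
```

Looping this over each port would cover all the drives on the card.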
Anyway, if this works and the chosen drives spin down when completely unused for the designated period (5 minutes in the example), then it would also be really handy to know if a second application of the same smartctl command to a device known to be already in standby mode, i.e. spun down, will cause that device to spin up again. If so, then I can add this info to the issue I've already opened as a result of this discussion:
but it would make this a fair bit more complicated as a work-around path, especially given the command would have to be executed at least once on every boot (at least it does in the hdparm variant of setting spin down time).
Otherwise we are stuck with using LSI's own custom and varied utilities, which are far less likely to be integrated into Rockstor (for some time at least), especially given they may well only work with certain kernel versions etc. and, as far as I know, vary from card to card; so another whole can of worms.
Let us know how you get on with the suggested --set variant of smartctl and just as a reminder this will have to be executed on the command line until the utility is proven to afford the time spent on the GitHub issue highlighted.
Thanks for persevering with this one as I believe LSI cards are a popular choice, not sure how popular spin down on such hardware is though.
This command seems to work, and from what I have been able to determine it spins down the drives.
Only thing is, I'm not exactly sure that they shut down; I believe they have, and the noise seems to be lower. I didn't disable all drives because two of them are running my VMs on VMware.