SMART values don't show in the web interface

Hello folks,

I'm totally new to CentOS and Rockstor, but I got it running now :slight_smile:

I just ran into the annoying bug that it won't wipe the old filesystem on a disk, so I needed to do it on the command line (hope you guys fix that later on).

Anyway, a bigger problem, I think, is that I don't get any SMART values in the web interface, while the command line works (smartctl -a /dev/sda, and likewise /dev/sdb):

=== START OF INFORMATION SECTION ===
Model Family: Western Digital Red (AF)
Device Model: WDC WD30EFRX-68EUZN0
Serial Number: WD-WCC4N1LX7PFZ
LU WWN Device Id: 5 0014ee 20c66eac5
Firmware Version: 82.00A82
User Capacity: 3,000,592,982,016 bytes [3.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5400 rpm
Device is: In smartctl database [for details use: -P show]
ATA Version is: ACS-2 (minor revision not indicated)
SATA Version is: SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is: Sun Dec 18 09:50:51 2016 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

Etc etc…

I also have a gut feeling that spin-down does not work, but I'm copying all my files over to the box right now, so I can't say yet :slight_smile:

Related to spin-down: is it possible to get Rockstor into the S3 state, for example?

@unomagan First off, welcome to Rockstor.

I am not familiar with any bug concerning wiping an existing file system, so if you could explain the steps you took to delete the existing file system via the Web-UI and how it failed, i.e. an error message or whatever, we can open an issue to address the specific shortfall. As is, the mechanism to wipe an existing drive is available by clicking the cog icon that appears next to the name of any device containing incompatible existing partitions / file systems. This is documented in the Disks section of the official docs, more specifically under the Partitioned disks and Wiping a Partitioned Disk sub-sections. The cog icon's popup tooltip also indicates its purpose.

Internally Rockstor then simply executes the following command:

wipefs -a /dev/disk/by-id/devname

If you have uncovered an issue / bug here then that's great, but we need more info on what the failure was in order to improve based on your experience.
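For reference, devname above stands for the drive's stable name under /dev/disk/by-id/. To find it and wipe by hand (a sketch only; the by-id name shown is a guess based on the model / serial in your smartctl output, so double-check you have the right disk before wiping):

ls -l /dev/disk/by-id/
wipefs -a /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N1LX7PFZ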

OK, hopefully this one is easier: have you tried pressing the “Refresh” button? I know this is inelegant, as it should be pressed for you, especially on first visit, and this inelegance has been noted in the form of the following currently open GitHub issue:

As regards the spin-down, do let us know how this goes; this one can be pretty tricky, as any drive access may end up waking another drive in the same pool. This all depends on the nature of the access, of course, and on the raid level. As is, Rockstor simply uses the hdparm command to attempt to configure the given drive to the desired idle timeout value, but given that this value is not actually agreed across manufacturers, and in some cases is flat out ignored, there can be a ‘your mileage / timeout values may vary’ element to it. The switch used in hdparm is -S, plus the -B switch if one is also specified when configuring via the hourglass icon.
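By way of a hedged example (the timeout encodings are per man hdparm, and the by-id name is again illustrative):

hdparm -S 240 /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N1LX7PFZ
hdparm -B 127 /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N1LX7PFZ

Here -S 240 requests spin-down after 240 x 5 seconds = 20 minutes of idle, and -B 127 is the highest APM level that still permits spin-down; as noted, some drives interpret or simply ignore these values in their own way.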

I am not aware of any work within Rockstor to establish or test whole-system S3 suspend and resume functionality, although modern processor C-states can pretty dramatically reduce CPU power consumption if they are enabled on your system, which I think they usually are.
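If you want to experiment at your own risk, Rockstor's CentOS base can normally be asked to enter S3 from the command line, hardware and driver support permitting:

systemctl suspend

Whether the system resumes cleanly, and whether the btrfs pools and services come back happily afterwards, is untested territory as far as I know.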

Hope that helps.

Hi!

You are right, refresh shows them (the SMART values).

It was basically something like this:

It showed a little icon saying the disk is an mdraid member, but there was no gear button to wipe the filesystem, so I needed to do it manually.
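For anyone else hitting this, the manual route is roughly the following (a sketch with example device names, not necessarily my exact commands; be very sure you have the right disk before wiping anything):

mdadm --stop /dev/md127
mdadm --zero-superblock /dev/sdb
wipefs -a /dev/sdb

The --stop step only applies if the array is still assembled; /dev/md127 and /dev/sdb are example names.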

@unomagan Hello again.

OK, I get it now, cheers. Yes, we don't really support mdraid features in the Web-UI, and as such there are no facilities to administer them, such as create, delete, etc. So currently mdraid members are labelled with the ‘i’ icon explaining their role in the system. No delete option is provided; given they are currently outside the Rockstor realm, I, at the time, thought it best we simply label them and leave well alone. The mdraid recognition itself was added as a stop-gap to accommodate certain requirements that were as yet not met by btrfs. The hope is that later we can adopt the btrfs way throughout, but as that doesn't yet exist, and there was an interest in mdraid on the system disk, we needed to at least acknowledge these drives' member status.

Prior to that change the cog was offered on these devices as usual, and users were given no indication that they were serving as backing devices to md devices; obviously this was less desirable than where we are now, as those not aware of their significance could accidentally delete them. There is a documentation issue opened as a result of the thread you cited that addresses this deleting-raid-members issue:

And a link here to the issue opened as a result of that thread:

Hope that helps with a little context on where we are on this one. There is obviously room for improvement here, but by keeping Rockstor predominantly focused on btrfs for its data drives we greatly simplify its user experience and UI consistency; still, we needed to at least accommodate this very basic level of mdraid support.

My understanding of Rockstor's current direction is to not become “everything and the kitchen sink”, as this role is already well filled by a number of other NAS solutions, and I at least think that, as a consequence, Rockstor has a significantly simpler user experience. But there are of course ongoing efforts to extend its capabilities, so it may well be that this failure to offer mdraid member re-assignment (a delete option) gets some attention as time goes on. I'm hoping that the next testing channel cycle of updates will see some enhancements in disk management ‘under the hood’, which may well help with later adding a tad more basic support for mdraid arrangements, such as addressing this ‘no delete for mdraid member’ situation. I personally would rather see more user feedback on which mdraid member belongs to which md device prior to adding a delete option for members, and this does currently have an outstanding issue:

Other contributors to Rockstor may of course disagree with my sentiments in this area though.
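In the meantime, the member-to-array mapping can at least be read from the command line (device names here are examples):

cat /proc/mdstat
mdadm --detail /dev/md127
mdadm --examine /dev/sdb

The first gives a quick overview of assembled arrays and their members, --detail describes one array, and --examine inspects the md superblock on a single member disk.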

Thanks for the help and info.

I ran into another issue on VMware. I'll open another post for that :slight_smile: