Unable to create pool

So I don’t have much to go on here, as the only error I get when creating a pool is:
Unknown internal error doing a POST to /api/pools

12 x 1.9 TB SSDs
All the disks show as passed when looking at the status.

Where can I grab some logs?

Has your Rockstor NAS been updated recently? If so, first try rebooting to clear up any potential issues between running and installed versions.

Failing that, can you please try adding the pool again and post the contents of /opt/rockstor/var/log/rockstor.log?

As there is a character limit on posts to the forum, please paste the logs to something like Pastebin and link to them here.
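If it helps, something like the following (run as root on the Rockstor box) should grab the tail end of that log for pasting - the line count is just a suggestion:

```
# Grab the last 500 lines of the Rockstor log for pasting to Pastebin
tail -n 500 /opt/rockstor/var/log/rockstor.log
```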

These logs should provide us some insight as to what is going on.

https://pastebin.com/AA7c2d5q

Thanks!

@bryonbrinkmann Welcome to the Rockstor community.

Sorry for taking so long on approving this last post folks. The link triggered an approval requirement in the forum software for new users.

@phillxnet, handballing this one back to you guys (devs) - the only errors I can see in the attached logs relate to:

  • smartctl being called incorrectly in storageadmin.views.disk [_update_disk_state], perhaps it’s called incorrectly when using a flash disk?
  • smartctl attempting to start a self test with one already running in storageadmin.util [_handle_exception] (albeit with apparently 100% progress)

Does pool creation rely on either of these being successful? The second seems a decent candidate, and should perhaps have a UI message or an override (smartctl -X) in place if this happens.
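For reference, checking and aborting a stuck self-test by hand would look roughly like this (sdX is a placeholder for the affected device):

```
# Show drive capabilities, including the current self-test execution status
smartctl -c /dev/sdX

# Abort a self-test that is currently in progress
smartctl -X /dev/sdX
```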

No worries - I kinda figured it was my fault.

So here’s an update. I was able to create a pool going the LONG way around.

  1. Formatted each drive using mkfs.btrfs /dev/sdX (whatever the letter was). I did this to make sure I cleared everything off the drive. (Rough command-line equivalents are sketched after this list.)
  2. Imported one drive into the pool.
  3. Erased the 11 remaining drives using the erase button so they could be imported/resized into the pool.
  4. Tried all 11 at once and it failed, so I did one drive at a time; it took a bit, but I finally got all the drives into the pool.
  5. Changed the RAID level of the pool from single to RAID 5. Not sure it’ll like that in the long run.
  6. Added zlib compression and ssd as extra mount options.
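For anyone else going this route, the rough command-line equivalents of those steps are below. The device names and mount point are placeholders (Rockstor normally mounts pools under /mnt2/), and note that mkfs.btrfs destroys everything on the disk:

```
# 1. Wipe each drive by creating a fresh btrfs filesystem on it (destroys all data)
mkfs.btrfs -f /dev/sdX

# 2./3. Add the remaining drives to the existing pool, one at a time
btrfs device add /dev/sdY /mnt2/poolname

# 5. Convert the pool's data and metadata profiles from single to raid5
btrfs balance start -dconvert=raid5 -mconvert=raid5 /mnt2/poolname

# 6. Remount with zlib compression and the ssd mount option
mount -o remount,compress=zlib,ssd /mnt2/poolname
```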

The 19.80 TB SSD pool is working so far. I’ve copied about 5 TB over with no failures.

I had issues with this system when it was (sorry in advance) a FreeNAS box. It would offline drives or give me crazy errors that made me think it was hardware. But every time I reviewed the logs or rebooted, all the drives reported correctly on the LSI controller and SATA bus - no errors. So I wanted to confirm it’s an issue with FreeNAS and not the hardware.

Any thoughts on how I could stress these drives?

@bryonbrinkmann,

Not sure that stressing the drives will reveal an issue for you; it sounds more like a potential controller issue to me.
That said, the easiest ways to stress the pool are either a dd write from /dev/urandom or an fio test.
Details for both are readily available via Google.
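Roughly, something like the following (the mount point, file sizes, and run time are just example values - point them at a scratch directory on the pool):

```
# Sequential write stress: 10 GiB of random data via dd
dd if=/dev/urandom of=/mnt2/poolname/stress.bin bs=1M count=10240 status=progress

# Mixed random read/write stress with fio for 10 minutes
fio --name=poolstress --directory=/mnt2/poolname --rw=randrw --bs=4k \
    --size=4G --numjobs=4 --time_based --runtime=600 --group_reporting
```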

I can say however that I am already stressed by:

I strongly suggest you research BTRFS RAID 5 and choose a different option, or ensure that your system has an adequate UPS and the appropriate config to soft-shutdown the system in the event of power loss.

BTRFS RAID 5 is not considered stable and should be avoided for any data you are concerned about losing.

Fortunately, if you do choose to reconsider this, BTRFS (and Rockstor) allows for changing RAID levels on the fly.
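On the command line, that conversion is just a balance with new profile targets, e.g. moving from raid5 to raid1 while the pool stays mounted (the mount point is a placeholder):

```
# Convert data and metadata from raid5 to raid1 with the pool online
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt2/poolname
```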

I did replace the controller and had the same issue. So far it’s been running solid and is still copying data - 12 TB and going without a drive failure or issue. What’s your suggestion on the RAID level? I don’t have a lot of experience with BTRFS software RAID, but the link you provided doesn’t sound that great.

Mind you, this is a home system and just me screwing around, so while I’m kinda concerned about data loss, the other systems are the primaries.

I strongly recommend RAID 10, 1 or 0, depending on your needs (listed in my preferred order).
You’ll see a reduction in usable space, but the sacrifice in space for the increased stability is - in my opinion - worth it.