Plex Media Server not working anymore after reboot, Raid 56 warning?

Hi,

I’m running Rockstor 3.9.1 and my Plex Media Server was running fine: all linked/mapped shares had been read into the library and everything worked. After a few days I noticed that I couldn’t access my photos (videos were still accessible), so I assumed the linked/mapped folder had been lost - not mounted anymore - and decided to turn Plex off and on again.

Now I can’t access Plex via the browser anymore at
http://192.168.1.10:32400/web/index.html (the same link I used before).

It is not accessible via the Plex app on my iOS devices either.

I’ve already tried to reboot my HP Microserver - no change, still not accessible.

Any idea what I should do?
What additional log files or terminal output should I provide to help you?

EDIT: Re-installing Plex also did not help :cry:

Thank you,
Roland

That looks suspiciously similar to:

Could you try the following to get some logs, and then turn the Plex Rock-on back on in the web UI?
docker logs -f plex-linuxserver.io
or
journalctl -fx

@Flox Thanks for your quick reply.

docker logs -f plex-linuxserver.io
This results in the ongoing message “Starting Plex Media Server” … I had to abort with Ctrl+C.

journalctl -fx
shows the following:

[root@rockstor ~]# journalctl -fx
– Logs begin at Mon 2017-07-17 15:03:30 CEST. –
Jul 20 20:25:45 rockstor kernel: BTRFS warning (device sdb): csum failed ino 266 off 52584448 csum 3825288279 expected csum 1920287365
Jul 20 20:25:46 rockstor dockerd[13373]: Starting Plex Media Server.
Jul 20 20:25:46 rockstor kernel: BTRFS warning (device sdb): csum failed ino 266 off 52584448 csum 3825288279 expected csum 1920287365
Jul 20 20:25:46 rockstor kernel: BTRFS warning (device sdb): csum failed ino 266 off 52584448 csum 3825288279 expected csum 1920287365
Jul 20 20:25:47 rockstor dockerd[13373]: Starting Plex Media Server.
Jul 20 20:25:48 rockstor kernel: BTRFS warning (device sdb): csum failed ino 266 off 52584448 csum 3825288279 expected csum 1920287365
Jul 20 20:25:48 rockstor kernel: BTRFS warning (device sdb): csum failed ino 266 off 52584448 csum 3825288279 expected csum 1920287365
Jul 20 20:25:48 rockstor dockerd[13373]: Starting Plex Media Server.
Jul 20 20:25:49 rockstor kernel: BTRFS warning (device sdb): csum failed ino 266 off 52584448 csum 3825288279 expected csum 1920287365
Jul 20 20:25:49 rockstor kernel: BTRFS warning (device sdb): csum failed ino 266 off 52584448 csum 3825288279 expected csum 1920287365
Jul 20 20:25:49 rockstor dockerd[13373]: Starting Plex Media Server.
Jul 20 20:25:50 rockstor kernel: BTRFS warning (device sdb): csum failed ino 266 off 52584448 csum 3825288279 expected csum 1920287365
Jul 20 20:25:50 rockstor kernel: BTRFS warning (device sdb): csum failed ino 266 off 52584448 csum 3825288279 expected csum 1920287365
Jul 20 20:25:50 rockstor dockerd[13373]: Starting Plex Media Server.
Jul 20 20:25:51 rockstor kernel: BTRFS warning (device sdb): csum failed ino 266 off 52584448 csum 3825288279 expected csum 1920287365
Jul 20 20:25:51 rockstor kernel: BTRFS warning (device sdb): csum failed ino 266 off 52584448 csum 3825288279 expected csum 1920287365
Jul 20 20:25:51 rockstor dockerd[13373]: Starting Plex Media Server.
Jul 20 20:25:52 rockstor kernel: BTRFS warning (device sdb): csum failed ino 266 off 52584448 csum 3825288279 expected csum 1920287365
Jul 20 20:25:52 rockstor kernel: BTRFS warning (device sdb): csum failed ino 266 off 52584448 csum 3825288279 expected csum 1920287365
Jul 20 20:25:52 rockstor dockerd[13373]: Starting Plex Media Server.
Jul 20 20:25:53 rockstor kernel: BTRFS warning (device sdb): csum failed ino 266 off 52584448 csum 3825288279 expected csum 1920287365
Jul 20 20:25:53 rockstor kernel: BTRFS warning (device sdb): csum failed ino 266 off 52584448 csum 3825288279 expected csum 1920287365
Jul 20 20:25:53 rockstor dockerd[13373]: Starting Plex Media Server.
Jul 20 20:25:54 rockstor kernel: BTRFS warning (device sdb): csum failed ino 266 off 52584448 csum 3825288279 expected csum 1920287365
Jul 20 20:25:54 rockstor kernel: BTRFS warning (device sdb): csum failed ino 266 off 52584448 csum 3825288279 expected csum 1920287365
Jul 20 20:25:54 rockstor dockerd[13373]: Starting Plex Media Server.
Jul 20 20:25:55 rockstor kernel: BTRFS warning (device sdb): csum failed ino 266 off 52584448 csum 3825288279 expected csum 1920287365
Jul 20 20:25:55 rockstor kernel: BTRFS warning (device sdb): csum failed ino 266 off 52584448 csum 3825288279 expected csum 1920287365

BTRFS warning?

Someone else on the forum also just had a problem with Plex and could resolve it by removing the image and trying another install:

In your case, however, this BTRFS warning may signal something wrong with one of your drives (sdb). Maybe a scrub would fix it, but I would wait for @phillxnet’s feedback on that.
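For reference, a scrub can be started and monitored from the command line like this (a sketch only - the `/mnt2/rm_pool` mount point is taken from your setup, and whether a scrub is advisable at all depends on @phillxnet's advice below):

```shell
# Start a scrub on the pool's mount point; it runs in the background.
btrfs scrub start /mnt2/rm_pool

# Check progress and any checksum/read errors found so far.
btrfs scrub status /mnt2/rm_pool

# A running scrub can be cancelled if needed:
btrfs scrub cancel /mnt2/rm_pool
```

Note that a scrub reads (and on repair, writes) the whole pool, so it exercises the drives heavily.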

I’m not sure whether these two issues (if the second actually is an issue) are related, however.

@Flox
I’ve tried to re-install it twice - I’m not sure whether trying it another couple of times is the right path to continue on.

EDIT: Strangely enough, the Emby server keeps running fine with all media!

@phillxnet
Both, regarding those BTRFS warnings
I’ve performed some self tests and the log shows no errors (WebUI, under disk management).

[root@rockstor ~]# btrfs filesystem show
Label: 'rockstor_rockstor'  uuid: 4b9b3b4f-6697-46fc-a7ca-6fc36e8b3b43
        Total devices 1 FS bytes used 3.05GiB
        devid    1 size 113.88GiB used 26.03GiB path /dev/sda3
Label: 'rm_pool'  uuid: d5829eb2-f19f-4329-83a4-c71261a270bf
        Total devices 3 FS bytes used 1009.28GiB
        devid    1 size 3.64TiB used 510.01GiB path /dev/sdb
        devid    2 size 3.64TiB used 510.01GiB path /dev/sdc
        devid    3 size 3.64TiB used 510.01GiB path /dev/sdd

I’m not a BTRFS expert - which commands should I run to get more clarity on what is going on here?

Thanks,
Roland

@glenngould Hello again. I think @Flox’s suspicion is correct.

Those messages appear to indicate an issue with your rm_pool, or a subvolume therein, hence the re-install not working (assuming you are re-using a rock-on-root subvolume/share on that pool, for example):

as you are only changing the rockstor_rockstor pool.

Presumably you mean the S.M.A.R.T. self tests. If so, it is entirely possible for btrfs to find errors that the drive’s self test cannot expose. A simple example of this might relate to data transport outside of the drive, i.e. the drive’s own self test is unlikely to be able to tell if a cable connection is faulty, for example. That aside, the best practice in a particular circumstance depends on context. If this is your only copy of the data, and you care about it, then your best route is to attempt no repair, or anything that exercises the drives (just in case of pending hardware failure), and to pursue (initially anyway) only read-only methods. The first of these would be the btrfs restore approach, as it is a tool with the primary aim of scraping all that it can from an unwell volume/pool:

The btrfs restore utility is a non-destructive method for attempting to recover
data from an unmountable filesystem. It makes no attempt to repair the filesystem,
which means that you cannot cause further damage by running it.

However, this can take an extended amount of time, and if you have current (and recently tested) backups, or the data is not worth it, your options are more numerous. It may be that a single subvolume of your pool is unwell, in which case, given the above caveat, you may be able to get up and running faster by simply deleting the problematic subvol and re-creating it - obviously losing all non-retrieved or non-backed-up data in the process.
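As a rough sketch of the read-only restore route (paths here are illustrative assumptions; the destination must be a separate, healthy filesystem with enough free space, and the source pool should not be mounted):

```shell
# Read-only recovery: copy what can be read from the unmounted pool
# to a destination on a DIFFERENT, healthy filesystem.
# -v = verbose, -i = ignore errors and continue past damaged files.
btrfs restore -v -i /dev/sdb /mnt/recovery_target

# Files can be limited to a particular subtree with --path-regex,
# e.g. (illustrative) only the rm_share subvolume's contents:
btrfs restore -v -i --path-regex '^/(|rm_share(|/.*))$' /dev/sdb /mnt/recovery_target
```

Since restore only reads from the source device, it cannot make the pool any worse, which is why it comes first.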

Note that Rockstor’s behaviour when pools or shares are unwell is currently not good; however, it is my current focus and some improvements in this regard are already in development. So unfortunately, at least in failure situations such as yours, you’re most likely going to have to resort to command-line tools for data retrieval / maintenance / repair etc.

I don’t see a report of the btrfs raid level you employ; if it’s parity based (btrfs raid5/6) then your repair hopes are considerably lower.
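The raid level in use can be read off quickly with, for example:

```shell
# Shows the Data/Metadata/System block group profiles
# (e.g. RAID1, RAID5) for the mounted pool.
btrfs filesystem df /mnt2/rm_pool
```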

Me neither, unfortunately; however, there are many on the forum who are very knowledgeable in this area, so it may just be a matter of time before more specific assistance steps forward.

It may help to paste the output of the following commands here to assist others with helping you:

cat /proc/mounts | grep btrfs

and assuming your “rm_pool” is mounted:

btrfs subvol list /mnt2/rm_pool

and for space issues:

btrfs fi usage /mnt2/rm_pool

Note that unwell btrfs volumes / pools will often go read-only, which can be the first sign of a problem and which consequently upsets apps that rely on write access. Let’s see what the output of those commands is, and hopefully others can chip in with more btrfs-specific knowledge, as Rockstor is known to not behave as well as it should / could (at least currently) on unwell volumes. Again, I’m working on that currently, and with sufficient focus this should result in an easier time going forward :slight_smile:.
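A quick way to spot a read-only remount is to list the btrfs mounts with their options (findmnt is part of util-linux and should be present on Rockstor):

```shell
# List all btrfs mounts with their mount options;
# an unwell pool will show "ro" where you expect "rw".
findmnt -t btrfs -o TARGET,OPTIONS
```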

Hope that helps.

@phillxnet thanks for your detailed reply - here are the outputs as requested.
But why is Plex not working while Emby is, if there is an issue with the drive?

[root@rockstor ~]# cat /proc/mounts | grep btrfs
/dev/sda3 / btrfs rw,noatime,ssd,space_cache,subvolid=258,subvol=/root 0 0
/dev/sda3 /home btrfs rw,noatime,ssd,space_cache,subvolid=257,subvol=/home 0 0
/dev/sda3 /mnt2/rockstor_rockstor btrfs rw,relatime,ssd,space_cache,subvolid=5,subvol=/ 0 0
/dev/sdb /mnt2/rm_pool btrfs rw,relatime,space_cache,autodefrag,subvolid=5,subvol=/ 0 0
/dev/sdb /mnt2/nextcloud_data btrfs rw,relatime,space_cache,autodefrag,subvolid=1677,subvol=/nextcloud_data 0 0
/dev/sdb /mnt2/pihole-config btrfs rw,relatime,space_cache,autodefrag,subvolid=1647,subvol=/pihole-config 0 0
/dev/sdb /mnt2/plex-data btrfs rw,relatime,space_cache,autodefrag,subvolid=877,subvol=/plex-data 0 0
/dev/sdb /mnt2/zm_mysql btrfs rw,relatime,space_cache,autodefrag,subvolid=965,subvol=/zm_mysql 0 0
/dev/sdb /mnt2/nextcloud_db btrfs rw,relatime,space_cache,autodefrag,subvolid=1675,subvol=/nextcloud_db 0 0
/dev/sdb /mnt2/rm_share btrfs rw,relatime,space_cache,autodefrag,subvolid=258,subvol=/rm_share 0 0
/dev/sdb /mnt2/duplicati-conf btrfs rw,relatime,space_cache,autodefrag,subvolid=993,subvol=/duplicati-conf 0 0
/dev/sdb /mnt2/zm13_data btrfs rw,relatime,space_cache,autodefrag,subvolid=1048,subvol=/zm13_data 0 0
/dev/sdb /mnt2/zm13_mysql btrfs rw,relatime,space_cache,autodefrag,subvolid=1047,subvol=/zm13_mysql 0 0
/dev/sdb /mnt2/emby-config btrfs rw,relatime,space_cache,autodefrag,subvolid=984,subvol=/emby-config 0 0
/dev/sdb /mnt2/zm_data btrfs rw,relatime,space_cache,autodefrag,subvolid=964,subvol=/zm_data 0 0
/dev/sdb /mnt2/nextcloud_app btrfs rw,relatime,space_cache,autodefrag,subvolid=1674,subvol=/nextcloud_app 0 0
/dev/sdb /mnt2/nextcloud_config btrfs rw,relatime,space_cache,autodefrag,subvolid=1676,subvol=/nextcloud_config 0 0
/dev/sdb /mnt2/rock-ons-root2 btrfs rw,relatime,space_cache,autodefrag,subvolid=803,subvol=/rock-ons-root2 0 0
/dev/sda3 /mnt2/home btrfs rw,relatime,ssd,space_cache,subvolid=257,subvol=/home 0 0
/dev/sda3 /mnt2/root btrfs rw,relatime,ssd,space_cache,subvolid=258,subvol=/root 0 0
/dev/sdb /mnt2/plex-config btrfs rw,relatime,space_cache,autodefrag,subvolid=897,subvol=/plex-config 0 0

[root@rockstor ~]# btrfs subvol list /mnt2/rm_pool
ID 258 gen 594440 top level 5 path rm_share
ID 803 gen 594599 top level 5 path rock-ons-root2
ID 807 gen 361113 top level 803 path rock-ons-root2/btrfs/subvolumes/01a44504c48d528cf64d2691e5d362f328962560eb0427c6f53fb2300df87bd9
ID 808 gen 361113 top level 803 path rock-ons-root2/btrfs/subvolumes/97e9fc98a8bdb50045400594330b50d79ae6e8b3bf90bb7b63c34751f4c495e0
… (removed some lines)
ID 951 gen 361113 top level 803 path rock-ons-root2/btrfs/subvolumes/77e62349a569d7d3d26e62c0aedb36571fa699b92890a03e6824fc992f1f5767
ID 952 gen 361113 top level 803 path rock-ons-root2/btrfs/subvolumes/d3d008d7bfcf5157ae5b322970725c1a924817175e3d3f49b29694a8913ccf56
ID 964 gen 361115 top level 5 path zm_data
ID 965 gen 361113 top level 5 path zm_mysql
ID 966 gen 361113 top level 803 path rock-ons-root2/btrfs/subvolumes/a4d82a8632705921eb8cf00e08e0033858d6c17ba53f7455c53b7b979f40a737-init
ID 967 gen 361113 top level 803 path rock-ons-root2/btrfs/subvolumes/a4d82a8632705921eb8cf00e08e0033858d6c17ba53f7455c53b7b979f40a737
ID 970 gen 361113 top level 803 path rock-ons-root2/btrfs/subvolumes/bf9e1d1eb9975542458d7e602d0623572459ae2519a1f1c8d9b65ed1bb53bec8
…(removed some lines)
ID 1105 gen 361113 top level 803 path rock-ons-root2/btrfs/subvolumes/d4775e99ca24701fb02180b5abaa1336de0c0a3dcb09120b52406cbb9544d530
ID 1106 gen 361113 top level 803 path rock-ons-root2/btrfs/subvolumes/1c725242d7819a80e2108dcafd0c9d9eb1da0fae9a2ccdf93086007033203ab6
ID 1107 gen 361113 top level 803 path rock-ons-root2/btrfs/subvolumes/1cbd42b3340298ee7a7d4ccb0acb1af3865cd73c6eabd3007a3c7109ab4e39b8
ID 1108 gen 361113 top level 803 path rock-ons-root2/btrfs/subvolumes/7c82c4f000abf1af59defbf00dd43f36aae712c728c281fe3c195e045b6dfd39
ID 1109 gen 361113 top level 803 path rock-ons-root2/btrfs/subvolumes/2aa853368b348ee7ef2c30fb60cf637b898c6b48a11d736cfdb3e7af52e2f4dc
ID 1110 gen 361113 top level 803 path rock-ons-root2/btrfs/subvolumes/3191a1906b4d164a4f75ce55949cbbf566f372b6b6387fc1d1398a716c5f86f8
ID 1111 gen 361113 top level 803 path rock-ons-root2/btrfs/subvolumes/8a224965fb5227f4885c4c3c9d247cc901934b4f1d4ffab3bccc56c7b36a9d40
ID 1112 gen 361113 top level 803 path rock-ons-root2/btrfs/subvolumes/738b1928910a6980cf57bdc8b6ac48160512f42a12bfc5193172710b9acf0a8e
ID 1113 gen 361113 top level 803 path rock-ons-root2/btrfs/subvolumes/b28a233db04cdb5a873f5d95ee352cbce630b4e7672e1ff57b281eb661741020
ID 1114 gen 361113 top level 803 path rock-ons-root2/btrfs/subvolumes/ac0f3d1aa6c4deee0cfc8c69c316c26c3277bc0fb0d3c5f312b0786e71f9b532
ID 1115 gen 361113 top level 803 path rock-ons-root2/btrfs/subvolumes/e78ec475e83aa0d8147f896ee5fd5895f1d02ba49ec00d389c22f4fbe019cd66
ID 1116 gen 361113 top level 803 path rock-ons-root2/btrfs/subvolumes/5169e3904594baac3829921fa03bce76bbc6ba57345a949c894defcce77e37db
ID 1117 gen 361113 top level 803 path rock-ons-root2/btrfs/subvolumes/2e521b49468e783f92032a7b94c68e6497396a4d6364e5af73dca406b951017e
ID 1118 gen 361113 top level 803 path rock-ons-root2/btrfs/subvolumes/5c5ed09a73d20808eeb7cf5930fdcdf47a5de589997b76d1f10536a1e710f863
ID 1119 gen 361113 top level 803 path rock-ons-root2/btrfs/subvolumes/b1d3fec7e8a4f2aba579277c9668f4af3dc5956b85a9c79736faf14d9875f98a
ID 1647 gen 594021 top level 5 path pihole-config
ID 1648 gen 464127 top level 803 path rock-ons-root2/btrfs/subvolumes/f771b3cbfdc2c77f1a0bc9375523b3e2ae61d9a5952d570fd05915b8b738524c
ID 1649 gen 433673 top level 803 path rock-ons-root2/btrfs/subvolumes/fb8aa0547ff50e3a7be95a39a12bbb7996824e006c63626d26159dc320237bc9
ID 1650 gen 433675 top level 803 path rock-ons-root2/btrfs/subvolumes/6f125f191737c9318d1b0c72990a9baa29c2a1ed9381292ca83f84a2ab41a903
ID 1651 gen 433677 top level 803 path rock-ons-root2/btrfs/subvolumes/5d6c3c87412cc1beaa0cd3d29ccd0a1ae5a7e2d8341a9b4574d328e8e3782985
ID 1652 gen 433679 top level 803 path rock-ons-root2/btrfs/subvolumes/ccc54187df0e68acb5e4cc522d8bd53845c238a886fc6e867d8251d0deaa30ae
ID 1653 gen 433682 top level 803 path rock-ons-root2/btrfs/subvolumes/c31bc7748bed032923a7dc87c2ec69437a9a0b80fcc0bb1ff3f8a37ce6eb8d5e
ID 1654 gen 433684 top level 803 path rock-ons-root2/btrfs/subvolumes/ba97d759fb0a48b2073756bc2ddc92f7f7e73fb1cd85a540840ecb4826453c69
ID 1655 gen 433686 top level 803 path rock-ons-root2/btrfs/subvolumes/b152452a74632e8c2b8d135a9a53b6fc86152656488fe867c252585341b69802
ID 1656 gen 433688 top level 803 path rock-ons-root2/btrfs/subvolumes/1c604aee310292595b4b0b7d7aa25b100d9e8f35d7e159df6471cda01eb2e995
ID 1657 gen 433690 top level 803 path rock-ons-root2/btrfs/subvolumes/3df04cd6bca34310d518a4a93c95d15beba4dfd7e4bcf24d1c713fcc0479b3b2
ID 1658 gen 433691 top level 803 path rock-ons-root2/btrfs/subvolumes/d8f8fd2a84641d7dcc66b275837617eb7bbd0f5aba065a4c7e0e4ef99e0fd3b1-init
ID 1659 gen 594029 top level 803 path rock-ons-root2/btrfs/subvolumes/d8f8fd2a84641d7dcc66b275837617eb7bbd0f5aba065a4c7e0e4ef99e0fd3b1
ID 1674 gen 593949 top level 5 path nextcloud_app
ID 1675 gen 594662 top level 5 path nextcloud_db
ID 1676 gen 593949 top level 5 path nextcloud_config
ID 1677 gen 593949 top level 5 path nextcloud_data
ID 1678 gen 464101 top level 803 path rock-ons-root2/btrfs/subvolumes/24b9e8e5e5c8c4fbbcd4a79546e5bd0f5755e2c5a586fd8b36bb3c1253562946
ID 1679 gen 464103 top level 803 path rock-ons-root2/btrfs/subvolumes/cee4b66841c65b8cb0484097dee5b755157032279f893bc5f23c497783c92d69
ID 1680 gen 464105 top level 803 path rock-ons-root2/btrfs/subvolumes/330646a4f5ee96ea4b3a2abd47d311697faedbaace7fb1ff7865f08fd2d43f90
ID 1681 gen 464107 top level 803 path rock-ons-root2/btrfs/subvolumes/87baa49c31b6ec5dd4399254dbf274f67ca36722674e3249e5fd74a514b7f274
ID 1682 gen 464109 top level 803 path rock-ons-root2/btrfs/subvolumes/b34353e990e107fd7de9e974a9cda0cfab43bd4faf3f2091b1de9ca2c296bcdd
ID 1683 gen 464111 top level 803 path rock-ons-root2/btrfs/subvolumes/8ef9d11e65b971ffed492375c3f6348405701e6a68a2e60925826be4752da08d
ID 1684 gen 464113 top level 803 path rock-ons-root2/btrfs/subvolumes/73cb62e658c84a132386099eb780693fa724645195419ab0793063e8086e0301
ID 1685 gen 464115 top level 803 path rock-ons-root2/btrfs/subvolumes/510360c78fafdef821aca40803b7d9a6fa716a120374996ad5667840c4900bd5
ID 1686 gen 464117 top level 803 path rock-ons-root2/btrfs/subvolumes/decd20fcde84bff039202269b899822243dddff7e43c9b6788c1b55a14a82f9f
ID 1687 gen 464119 top level 803 path rock-ons-root2/btrfs/subvolumes/4e27160ff6a8a47802d95120e28607ea5ed0ed626f48411ffde11476fe1fa00c
ID 1688 gen 464121 top level 803 path rock-ons-root2/btrfs/subvolumes/4379deb86e25fd5a199eed2c2e1c13584ae14093872a63dda4c4a71628b6dca2
ID 1689 gen 464124 top level 803 path rock-ons-root2/btrfs/subvolumes/fd28758314c0c589c221f17722cacb59499703b2fcb5ea4f1b6e88d08025159a
ID 1690 gen 464125 top level 803 path rock-ons-root2/btrfs/subvolumes/e6271b2c80c6d90717f5bbc5933ee425718b888272aa8d8dff62a49112e951a4-init
ID 1691 gen 594021 top level 803 path rock-ons-root2/btrfs/subvolumes/e6271b2c80c6d90717f5bbc5933ee425718b888272aa8d8dff62a49112e951a4
ID 1692 gen 464130 top level 803 path rock-ons-root2/btrfs/subvolumes/89285a8e996e3e31dfc099bc3b412e4e447e7ea821669a4a08e6a3360447050e
ID 1693 gen 464132 top level 803 path rock-ons-root2/btrfs/subvolumes/69377fac7b95bfaddf45861169bacae4eff3ea36614cb0c610d64b0b89f481c0
ID 1694 gen 464134 top level 803 path rock-ons-root2/btrfs/subvolumes/796ffe80485511565106a336be34165493e0cc2792e325b72a042c0d6b742a32
ID 1695 gen 464136 top level 803 path rock-ons-root2/btrfs/subvolumes/63082b5b8ce191562d1e864de782bbb3bee28ff377ff8d34de58e72296f9e83b
ID 1696 gen 464138 top level 803 path rock-ons-root2/btrfs/subvolumes/b58addd62e459d30a4e877492a146af69ef548232b1b355e7204f00705980def
ID 1697 gen 464140 top level 803 path rock-ons-root2/btrfs/subvolumes/e6d7a6849d35d1c8910c2b309d9033c4023e8635bab52fb6bc507906fac0a65e
ID 1698 gen 464142 top level 803 path rock-ons-root2/btrfs/subvolumes/35f2959fe910164f547d1ae145060ad2f7972f65e6ae31218547b66f7ecbf7e0
ID 1699 gen 464144 top level 803 path rock-ons-root2/btrfs/subvolumes/2b84c65f30d87dada37c7d6fca6f2281c65312a3d16c5cbc8aef64a1fafa570c
ID 1700 gen 464146 top level 803 path rock-ons-root2/btrfs/subvolumes/3668a63d18cdf37d1304b53601f54b21c4bfd75fe4b33ae6de11c677eedd3169
ID 1701 gen 464148 top level 803 path rock-ons-root2/btrfs/subvolumes/9595e7a94703ca6da7e3ab25b61e6e0d6f1aebcb1b845117a9ae16ae18350c12
ID 1702 gen 464150 top level 803 path rock-ons-root2/btrfs/subvolumes/4de39ad3a42c43e4c76ef0b763149c13bc2cd2bc727eca0c4a3bcbef4db96e4f
ID 1703 gen 464152 top level 803 path rock-ons-root2/btrfs/subvolumes/7c3ff3af03b3959b9a28bc7c347f863eb022f7a5a166b4c55b1439e79dcbac9d
ID 1704 gen 464154 top level 803 path rock-ons-root2/btrfs/subvolumes/bd33e4983ab610340da5dbdc56a434a7616ce22fb3ff9d247b60c4ead5dfaea6
ID 1705 gen 464155 top level 803 path rock-ons-root2/btrfs/subvolumes/09d3a7393dc6046a828d1a692629f2b1a995fd4e1f497b275590ecb9a4d0d40a-init
ID 1706 gen 594659 top level 803 path rock-ons-root2/btrfs/subvolumes/09d3a7393dc6046a828d1a692629f2b1a995fd4e1f497b275590ecb9a4d0d40a
ID 1735 gen 512456 top level 803 path rock-ons-root2/btrfs/subvolumes/2021f41d52fd75c7af687abb0f0418a671db4c108f0afca3fb10c23c4715b611
ID 1736 gen 512458 top level 803 path rock-ons-root2/btrfs/subvolumes/c53bde20f371f453e9a3444c3d682dc02f4d55e0f7a1c25dedfe850de77c1861
ID 1737 gen 512460 top level 803 path rock-ons-root2/btrfs/subvolumes/9d206c9fb6108a6d46b1466e161927904a28b18fc6d49a93e392c93e42378254
ID 1738 gen 512462 top level 803 path rock-ons-root2/btrfs/subvolumes/d0f079317ea84c47130aa31ff026034c89816966d185de17338de231f1924a01
ID 1739 gen 512464 top level 803 path rock-ons-root2/btrfs/subvolumes/a419a5c850817011cd5f1d8014ec451b39db38bd4d393c19318e80607d3119a6
ID 1740 gen 512466 top level 803 path rock-ons-root2/btrfs/subvolumes/b2cb323eab633f9ac5a6f85ac9e76885b1d664b75f96c5ed469dd01ec17f3b2b
ID 1741 gen 512468 top level 803 path rock-ons-root2/btrfs/subvolumes/54f1e8b3efed736b7176b8a8c14534e6087ca5ed23260ab58c40b385735c127c
ID 1742 gen 512470 top level 803 path rock-ons-root2/btrfs/subvolumes/d7ae92fe354c17c723aa22060dde6dc0d81f3e886eed821d8be0994b2fe1c627
ID 1743 gen 512472 top level 803 path rock-ons-root2/btrfs/subvolumes/114837200a36f48df2ed2a84b532a35bbe9c05e7d149660c4e7c6fd0e0731633
ID 1744 gen 593959 top level 803 path rock-ons-root2/btrfs/subvolumes/65fc992f4df970927a44c64499bd538ac042c55a5a7e88357f2286be3570d30a
ID 1776 gen 561316 top level 5 path .snapshots/rm_share/rm_share_201707072330
ID 1794 gen 593960 top level 803 path rock-ons-root2/btrfs/subvolumes/4f09235777b90695485f5b9a664e53d576ea6358a8878b51e2ba5290389602c3-init
ID 1795 gen 594677 top level 803 path rock-ons-root2/btrfs/subvolumes/4f09235777b90695485f5b9a664e53d576ea6358a8878b51e2ba5290389602c3

[root@rockstor ~]# btrfs fi usage /mnt2/rm_pool
WARNING: RAID56 detected, not implemented
WARNING: RAID56 detected, not implemented
WARNING: RAID56 detected, not implemented
Overall:
Device size: 10.92TiB
Device allocated: 0.00B
Device unallocated: 10.92TiB
Device missing: 0.00B
Used: 0.00B
Free (estimated): 0.00B (min: 8.00EiB)
Data ratio: 0.00
Metadata ratio: 0.00
Global reserve: 512.00MiB (used: 0.00B)

Data,RAID5: Size:1014.00GiB, Used:1005.14GiB
/dev/sdb 507.00GiB
/dev/sdc 507.00GiB
/dev/sdd 507.00GiB

Metadata,RAID5: Size:6.00GiB, Used:3.99GiB
/dev/sdb 3.00GiB
/dev/sdc 3.00GiB
/dev/sdd 3.00GiB

System,RAID5: Size:16.00MiB, Used:112.00KiB
/dev/sdb 8.00MiB
/dev/sdc 8.00MiB
/dev/sdd 8.00MiB

Unallocated:
/dev/sdb 3.14TiB
/dev/sdc 3.14TiB
/dev/sdd 3.14TiB

All, is anyone able to see here the root cause of the above RAID warning, and how it could be connected to Plex not working (Emby and the Samba share work normally)?

What is the best way to check the health of a btrfs raid5 pool?

Thanks!

Just a thought. I had something similar at one point.
But as you have some FS issues, it might not work.

What I did was this:

  1. Stopped and uninstalled the Plex Rock-on
  2. Pulled the latest image from the terminal: docker pull linuxserver/plex
  3. Reinstalled the Plex Rock-on
  4. Made sure ALL Rock-ons are started, then ran this command in the terminal to clean up old docker stuff: docker system prune
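The terminal parts of the steps above can be sketched as follows (the image name matches the linuxserver.io Plex container used earlier in this thread; note the command is `docker system prune`, with `system` before `prune`):

```shell
# Step 2: pull the latest Plex image after stopping/uninstalling the Rock-on.
docker pull linuxserver/plex

# Step 4: with all wanted Rock-ons running, remove stopped containers,
# dangling images, unused networks and build cache.
docker system prune
```

The prune is done last, with everything running, so that only genuinely unused docker objects get removed.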

@Fredrik thanks for your reply. I’ve followed your steps but there is no change… the Plex web interface is not appearing - the Chrome browser shows an empty page: “This site can’t be reached” :cry:

Just out of curiosity, did you try to access it with https? If you enable required encryption in Plex, then you will not reach the page without adding the https.
What do the docker logs say after you made that change… can you paste some from the container?

Hi @Fredrik, sorry for my late response… using https or not does not make any difference. The problem is still present.
Where can I find the docker-related log files?
Within the Rockstor WebUI, under the log manager, I have not seen any related entry.

Due to the RAID56 warning posted above, I’ve updated the system to 3.9.1-4, which includes the new kernel (4.12)… no change from that update.

Thanks!