Can you remind me of the steps to do that?
Edit: I think you are right, a reboot also broke Samba once again.
This is something that needs fixing ASAP.
Since my media center relies on NFS working, I had to get a little bit creative.
I turned off both the NFS and Samba services in the GUI and asked Rockstor to reboot with them disabled.
After the reboot, I waited a couple of minutes and then re-enabled both.
Now I have both Samba and NFS access.
Well, I found / experienced another error.
Trying to see SMART data for my disks, when I hit refresh on a disk, I get an error saying:
Error!
local variable 'capabilities' referenced before assignment
The error is present on all disks in the NAS. When I hit refresh to get updated data, it errors out.
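For anyone curious, this kind of message usually means a Python variable was only assigned inside a branch that never ran. A minimal, hypothetical sketch of the pattern (not Rockstor's actual code), assuming the SMART parser only sets 'capabilities' when a matching line is found in the smartctl output:

def parse_capabilities(smart_lines):
    # If no line matches, 'capabilities' is never assigned and the return below
    # raises: local variable 'capabilities' referenced before assignment
    for line in smart_lines:
        if line.startswith('Capabilities:'):
            capabilities = line.split(':', 1)[1].strip()
    return capabilities

def parse_capabilities_fixed(smart_lines):
    capabilities = None  # initialise up front so the name always exists
    for line in smart_lines:
        if line.startswith('Capabilities:'):
            capabilities = line.split(':', 1)[1].strip()
    return capabilities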
Thanks @KarstenV, I noticed that too. I had hoped to release an update with fixes by now, but things are a bit slow due to the Thanksgiving holiday here. But I'll push the update soon!
~ ᐅ systemctl status -l smb
smb.service - Samba SMB Daemon
Loaded: loaded (/etc/systemd/system/smb.service; enabled)
Active: inactive (dead)
Nov 28 20:09:10 rockstor systemd[1]: Dependency failed for Samba SMB Daemon.
~ ᐅ systemctl list-dependencies smb
smb.service
├─rockstor-bootstrap.service
├─system.slice
└─basic.target
~ ᐅ systemctl status -l rockstor-bootstrap.service
rockstor-bootstrap.service - Rockstor bootstrapping tasks
Loaded: loaded (/etc/systemd/system/rockstor-bootstrap.service; enabled)
Active: failed (Result: exit-code) since Sat 2015-11-28 20:09:10 CST; 48min ago
Process: 2234 ExecStart=/opt/rockstor/bin/bootstrap (code=exited, status=1/FAILURE)
Main PID: 2234 (code=exited, status=1/FAILURE)
CGroup: /system.slice/rockstor-bootstrap.service
Nov 28 20:09:10 rockstor bootstrap[2234]: File "/opt/rockstor/src/rockstor/scripts/bootstrap.py", line 43, in main
Nov 28 20:09:10 rockstor bootstrap[2234]: aw.api_call('network')
Nov 28 20:09:10 rockstor bootstrap[2234]: File "/opt/rockstor/src/rockstor/cli/api_wrapper.py", line 85, in api_call
Nov 28 20:09:10 rockstor bootstrap[2234]: self.set_token()
Nov 28 20:09:10 rockstor bootstrap[2234]: File "/opt/rockstor/src/rockstor/cli/api_wrapper.py", line 79, in set_token
Nov 28 20:09:10 rockstor bootstrap[2234]: raise Exception(msg)
Nov 28 20:09:10 rockstor bootstrap[2234]: Exception: Exception while setting access_token for url(https://192.168.2.2): HTTPSConnectionPool(host='192.168.2.2', port=443): Max retries exceeded with url: /o/token/ (Caused by <class 'socket.error'>: [Errno 111] Connection refused). content: None
Nov 28 20:09:10 rockstor systemd[1]: rockstor-bootstrap.service: main process exited, code=exited, status=1/FAILURE
Nov 28 20:09:10 rockstor systemd[1]: Failed to start Rockstor bootstrapping tasks.
Nov 28 20:09:10 rockstor systemd[1]: Unit rockstor-bootstrap.service entered failed state.
I’m experiencing the same issue. SMB shares all register as blank after update and reboot. Seems it’s a known issue, but just wanted to make sure it was posted somewhere. I’ll recreate my shares and won’t reboot for now.
Thanks guys!
Thanks @forrest_xu. @roweryan has also reported this problem in this issue. I am working on it right now and will push out an update today. I am putting in retry logic, as the root cause of the set_token problem is elusive atm.
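For the curious, the retry logic amounts to something along these lines. This is a minimal hypothetical sketch rather than the actual patch, assuming the bootstrap script just needs to tolerate the web UI not yet listening at boot; the /o/token/ URL comes from the traceback above, while the credentials, grant type and timing parameters here are made up:

import time
import requests

def set_token_with_retry(base_url, client_id, client_secret, attempts=15, delay=2):
    # Keep retrying the OAuth token request until the local web UI is up.
    for attempt in range(1, attempts + 1):
        try:
            r = requests.post('%s/o/token/' % base_url,
                              data={'grant_type': 'client_credentials'},
                              auth=(client_id, client_secret),
                              verify=False)
            r.raise_for_status()
            return r.json()['access_token']
        except requests.exceptions.RequestException:
            # Typically [Errno 111] Connection refused while services are still starting.
            if attempt == attempts:
                raise
            time.sleep(delay)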
Thanks to @phillxnet for bugfixes and to @KarstenV, @roweryan and others for dealing with the annoying bugs related to rockstor-bootstrap in the last update. We just rolled out the 3.8-9.07 testing update.
We are mostly in a testing period right now, as I'd like to release the 3.8-10 stable update soon. Having said that, it may take a few more days. We may not be totally out of the woods with the auth token related issue that was causing the bootstrap service to not always start as expected. Plus there is more replication related work left, and perhaps one or two hot fixes.
So here’s the log for this update
https://github.com/rockstor/rockstor-core/issues/1027
I am happy to announce that 3.8-9.08 is out. This update concluded all the necessary changes I've wanted to (and some I had to) make to the replication feature. I made it robust against most, if not all, potential network related failures. Documentation and UI polishing will follow soon, but I don't want to delay the stable release as, functionally speaking, replication is working pretty well. We've also made some UI changes that I am sure we'll all welcome.
We've also been able to rc-test quite a bit and have a small list of things to address before releasing 3.8-10. I hope to make the release no later than this weekend, hopefully sooner. I don't think anyone has tinkered with replication in the last few updates. I'd be thrilled if you do now!
Log for 3.8-9.08
3.8-9.08 broke my web GUI; I now get 'Unknown client error doing a GET to /api/dashboardconfig/'. I updated from 3.8-9.06.
Looks like it for some reason adds extra /'s in URLs, like: https://xxx.xxx.xxx.xxx/#/services. Manually typing https://xxx.xxx.xxx.xxx/#services fixes it.
OK, just pushed 3.8-9.09. We haven’t found any serious problems in testing so far, but fixed a few minor things. Here’s what’s new in this very minor update.
It seems to be working now; it looks like Firefox wasn't properly refreshing the page and added something extra to the URLs. After some more CTRL-SHIFT-R it finally sorted itself out.