Hi @ArmyHill01, and welcome to the community!
Thanks a lot for sharing your experience and feedback; I particularly appreciate and welcome it.
I agree that Portainer is a very useful tool for those with an existing setup who need complete customization. I use it myself for this reason.
My first reaction was: "yes, you can indeed just turn on the Rock-on service (which starts the docker daemon), and then simply set up your own containers whichever way you want (via Portainer, for instance)". I then wondered whether you also meant controlling which docker version you install, with which settings, etc. That case would be doable too, but I can't guarantee what would happen if you attempt to turn on the Rock-on service later on. Indeed, turning on the Rock-on service for the first time writes the docker.service configuration with some slight customization to better fit Rockstor's system (mostly btrfs-related). This step might introduce an incompatibility if you first install and run your own docker version and only later decide to turn on the Rock-on service: your previous configuration may get overridden, and a different docker version (if any) may be incompatible with that configuration. Note, however, that this is based on my memory and I've never tested it, so I may well be wrong and everything may run smoothly.
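If you want to see what (if anything) diverges between the unit Rockstor writes and the package default before flipping the service on, a quick diff works. A minimal sketch, assuming the paths shown elsewhere in this thread (this is not Rockstor code):

```python
# Sketch: compare the active docker.service unit against the package default
# to spot customizations before enabling the Rock-on service.
# The two paths below are assumptions based on this thread, not verified API.
import difflib
from pathlib import Path


def unit_diff(active_path, package_path):
    """Return a unified diff (package -> active) between two unit files."""
    active = Path(active_path).read_text().splitlines()
    package = Path(package_path).read_text().splitlines()
    return list(difflib.unified_diff(
        package, active,
        fromfile=package_path, tofile=active_path,
        lineterm=""))


# Example usage (run on a live system):
# for line in unit_diff("/etc/systemd/system/docker.service",
#                       "/usr/lib/systemd/system/docker.service"):
#     print(line)
```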
Most importantly, note that Rockstor is currently transitioning to a rebase onto openSUSE (see updates and rationale in a separate post by @phillxnet and his other posts therein). As part of this process, there is a current pull request proposing to simply source the docker.service configuration straight from the docker package itself (and then apply Rockstor's settings, including the user-defined ones, on top of that).
rockstor:master ← FroggyFlox:Issue2044_DockerConf
opened 03:19PM - 21 May 19 UTC
Fixes #2044
In Leap 15.1 rc, the docker package was updated, breaking our use of `docker-opensuse-leap.service` due to unmet dependencies.
As proposed in a previous rework from @phillxnet (#1989, see below), we can move to sourcing the docker-ce package's configuration file directly and apply our custom settings onto it. This will help keep our custom settings in sync with any changes in the upstream docker-ce package.
https://github.com/rockstor/rockstor-core/blob/1102e2805a9290704ed6f3fd8963dbab37674c5e/src/rockstor/smart_manager/views/docker_service.py#L79
Note that these changes are proposed only for openSUSE-based Rockstor builds in order to keep full compatibility with our existing CentOS builds.
### Summary of changes
This PR simply sources the package's `docker.service` file from `/usr/lib/systemd/system/docker.service` if `distro_id` is either `opensuse-leap` or `opensuse-tumbleweed`.
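That selection logic can be sketched roughly as follows. This is an illustration of the idea, not the actual code from `docker_service.py`; the CentOS fallback name is an assumption:

```python
# Sketch of the proposed source-file selection: on openSUSE, start from the
# docker package's own unit file so custom settings stay in sync with
# upstream changes; elsewhere, keep the unit template shipped with Rockstor.
# Not the actual Rockstor implementation; fallback name is an assumption.
OPENSUSE_IDS = ("opensuse-leap", "opensuse-tumbleweed")
PACKAGE_UNIT = "/usr/lib/systemd/system/docker.service"


def docker_service_source(distro_id):
    """Return the docker.service file to use as the configuration base."""
    if distro_id in OPENSUSE_IDS:
        return PACKAGE_UNIT
    # CentOS builds keep the existing Rockstor-provided template.
    return "docker.service"
```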
### Turning the Rock-on service ON
#### CentOS
```
May 21 10:50:01 rockdev systemd[1]: Reloading.
May 21 10:50:01 rockdev systemd[1]: Starting Docker Socket for the API.
May 21 10:50:01 rockdev systemd[1]: Listening on Docker Socket for the API.
May 21 10:50:01 rockdev systemd[1]: Started Docker Application Container Engine.
```
#### Leap 15.1 rc
```
May 21 10:16:57 rockdev systemd[1]: Reloading.
May 21 10:16:57 rockdev systemd[1]: Starting Docker Application Container Engine...
May 21 10:16:58 rockdev kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 21 10:16:58 rockdev kernel: Bridge firewalling registered
May 21 10:16:58 rockdev kernel: nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
May 21 10:16:58 rockdev kernel: ip_tables: (C) 2000-2006 Netfilter Core Team
May 21 10:16:58 rockdev kernel: Initializing XFRM netlink socket
May 21 10:16:58 rockdev kernel: Netfilter messages via NETLINK v0.30.
May 21 10:16:58 rockdev kernel: ctnetlink v0.93: registering with nfnetlink.
May 21 10:16:58 rockdev systemd-udevd[29250]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
May 21 10:16:58 rockdev NetworkManager[803]: <info> [1558448218.5767] manager: (docker0): new Bridge device (/org/freedesktop/NetworkManager/Devices/3)
May 21 10:16:58 rockdev NetworkManager[803]: <info> [1558448218.6129] device (docker0): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
May 21 10:16:58 rockdev NetworkManager[803]: <info> [1558448218.6148] keyfile: add connection in-memory (76d248d1-0646-4b2d-af60-d64db1b236c4,"docker0")
May 21 10:16:58 rockdev NetworkManager[803]: <info> [1558448218.6153] device (docker0): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
May 21 10:16:58 rockdev NetworkManager[803]: <info> [1558448218.6161] device (docker0): Activation: starting connection 'docker0' (76d248d1-0646-4b2d-af60-d64db1b236c4)
May 21 10:16:58 rockdev NetworkManager[803]: <info> [1558448218.6165] device (docker0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
May 21 10:16:58 rockdev NetworkManager[803]: <info> [1558448218.6169] device (docker0): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
May 21 10:16:58 rockdev NetworkManager[803]: <info> [1558448218.6173] device (docker0): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
May 21 10:16:58 rockdev NetworkManager[803]: <info> [1558448218.6174] device (docker0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
May 21 10:16:58 rockdev NetworkManager[803]: <info> [1558448218.6178] device (docker0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
May 21 10:16:58 rockdev NetworkManager[803]: <info> [1558448218.6180] device (docker0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
May 21 10:16:58 rockdev NetworkManager[803]: <info> [1558448218.6189] device (docker0): Activation: successful, device activated.
May 21 10:16:58 rockdev dbus-daemon[709]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.2' (uid=0 pid=803 comm="/usr/sbin/NetworkManager --no-daemon ")
May 21 10:16:58 rockdev kernel: IPv6: ADDRCONF(NETDEV_UP): docker0: link is not ready
May 21 10:16:58 rockdev systemd[1]: Starting Network Manager Script Dispatcher Service...
May 21 10:16:58 rockdev dbus-daemon[709]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
May 21 10:16:58 rockdev systemd[1]: Started Network Manager Script Dispatcher Service.
May 21 10:16:58 rockdev nm-dispatcher[29277]: req:1 'up' [docker0]: new request (3 scripts)
May 21 10:16:58 rockdev nm-dispatcher[29277]: req:1 'up' [docker0]: start running ordered scripts...
May 21 10:16:58 rockdev systemd[1]: Started Docker Application Container Engine.
```
#### Tumbleweed 20190517
```
May 21 10:30:58 rockdev systemd[1]: Starting Docker Application Container Engine...
May 21 10:30:59 rockdev kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 21 10:30:59 rockdev kernel: Bridge firewalling registered
May 21 10:30:59 rockdev kernel: bpfilter: Loaded bpfilter_umh pid 3236
May 21 10:30:59 rockdev kernel: Initializing XFRM netlink socket
May 21 10:30:59 rockdev systemd-udevd[3228]: Using default interface naming scheme 'v240'.
May 21 10:30:59 rockdev NetworkManager[888]: <info> [1558449059.1882] manager: (docker0): new Bridge device (/org/freedesktop/NetworkManager/Devices/3)
May 21 10:30:59 rockdev systemd-udevd[3228]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
May 21 10:30:59 rockdev systemd-udevd[3228]: Could not generate persistent MAC address for docker0: No such file or directory
May 21 10:30:59 rockdev systemd[1]: Started Docker Application Container Engine.
```
### Final docker.service contents
#### CentOS
```
[root@rockdev ~]# cat /etc/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target docker.socket rockstor-bootstrap.service
Requires=docker.socket
[Service]
ExecStart=/opt/build/bin/docker-wrapper /mnt2/rockons_root
MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
[Install]
WantedBy=multi-user.target
[root@rockdev ~]#
```
#### Leap 15.1 rc
```
linux-1pi9:~ # cat /etc/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target lvm2-monitor.service SuSEfirewall2.service rockstor-bootstrap.service
[Service]
EnvironmentFile=/etc/sysconfig/docker
# While Docker has support for socket activation (-H fd://), this is not
# enabled by default because enabling socket activation means that on boot your
# containers won't start until someone tries to administer the Docker daemon.
Type=notify
NotifyAccess=all
ExecStart=/opt/build/bin/docker-wrapper --add-runtime oci=/usr/sbin/docker-runc $DOCKER_NETWORK_OPTIONS $DOCKER_OPTS /mnt2/rockons_root
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this property.
TasksMax=infinity
# Set delegate yes so that systemd does not reset the cgroups of docker containers
# Only systemd 218 and above support this property.
Delegate=yes
# Kill only the docker process, not all processes in the cgroup.
KillMode=process
# Restart the docker process if it exits prematurely.
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
```
#### Tumbleweed 20190517
```
rockdev:~ # cat /etc/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target lvm2-monitor.service SuSEfirewall2.service rockstor-bootstrap.service
[Service]
EnvironmentFile=/etc/sysconfig/docker
# While Docker has support for socket activation (-H fd://), this is not
# enabled by default because enabling socket activation means that on boot your
# containers won't start until someone tries to administer the Docker daemon.
Type=notify
NotifyAccess=all
ExecStart=/opt/build/bin/docker-wrapper --add-runtime oci=/usr/sbin/docker-runc $DOCKER_NETWORK_OPTIONS $DOCKER_OPTS /mnt2/rockons_root
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this property.
TasksMax=infinity
# Set delegate yes so that systemd does not reset the cgroups of docker containers
# Only systemd 218 and above support this property.
Delegate=yes
# Kill only the docker process, not all processes in the cgroup.
KillMode=process
# Restart the docker process if it exits prematurely.
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
```
### Shortcomings and potential improvements
Should we test for the existence of the source file (`/usr/lib/systemd/system/docker.service`) and raise an exception if it is not found? It is my understanding that the docker-ce package will be "shipped" with the Rockstor build, so the file should always be present, but should we still account for this possibility?
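Such a guard could look something like this; a minimal sketch only, and the exception class name is a placeholder, not an existing Rockstor type:

```python
# Sketch of an existence check before sourcing the package's unit file.
# DockerServiceConfigError is a hypothetical placeholder exception.
import os

PACKAGE_UNIT = "/usr/lib/systemd/system/docker.service"


class DockerServiceConfigError(Exception):
    pass


def read_package_unit(path=PACKAGE_UNIT):
    """Return the unit file contents, or raise if the file is missing."""
    if not os.path.isfile(path):
        raise DockerServiceConfigError(
            "Expected docker.service not found at {}; "
            "is the docker package installed?".format(path))
    with open(path) as f:
        return f.read()
```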
I'm unsure of exactly which docker version will be shipped, but I believe it will simply be provided by upstream directly, so either openSUSE Leap 15.1 or Tumbleweed… the latter should be the latest and greatest, I believe. @phillxnet, am I correct on that one?
Yes, that's personally what helped me at first to get acquainted with docker and its ecosystem. I progressively became more familiar with it and started feeling some limitations the Rock-on system had at the time for a few more "customized" configurations. I do see a very interesting opportunity to provide an easy way for unfamiliar users to set up more complex container configurations, given some improvements to the current Rock-ons framework. On this topic, there has been quite a bit of work done lately, and I have an upcoming series of reworks to implement docker networking in Rockstor. You can read more in the issue below and the links therein if interested:
opened 10:18PM - 18 Jan 19 UTC
closed 01:11PM - 22 Dec 20 UTC
This is an issue dedicated to step 3 of the Docker networks (re)work (#1982), corresponding to the implementation of an interface for the creation of a docker network.
As discussed in #1982, this would be better integrated in the existing "_System > Network_" part of Rockstor's UI. My current idea would thus be to simply add a new option in the "_Add Connection_" section to create a docker network, as seen below:

In order to offer the same level of customization as is offered for system connections, we can keep the same configuration method, set to "Auto" by default, with the possibility of selecting "Manual" parameters. In the latter case, docker-specific fields will appear, corresponding to the options offered by the `docker network create` command.
As per the [docker documentation](https://docs.docker.com/engine/reference/commandline/network_create), these are as follows:
```
--attachable Enable manual container attachment
--aux-address Auxiliary IPv4 or IPv6 addresses used by Network driver
--config-from The network from which copying the configuration
--config-only Create a configuration only network
--driver Driver to manage the Network
--gateway IPv4 or IPv6 Gateway for the master subnet
--ingress Create swarm routing-mesh network
--internal Restrict external access to the network
--ip-range Allocate container ip from a sub-range
--ipam-driver IP Address Management Driver
--ipam-opt Set IPAM driver specific options
--ipv6 Enable IPv6 networking
--label Set metadata on a network
--scope Control the network's scope
--subnet Subnet in CIDR format that represents a network segment
--opt Set driver specific options:
com.docker.network.bridge.name bridge name to be used when creating the Linux bridge
com.docker.network.bridge.enable_ip_masquerade Enable IP masquerading
com.docker.network.bridge.enable_icc Enable or Disable Inter Container Connectivity
com.docker.network.bridge.host_binding_ipv4 Default IP when binding container ports
com.docker.network.driver.mtu Set the containers network MTU
```
In our case, we would need to support only the `bridge` driver (to begin with, at least), which leaves us with the following parameters:
```
--aux-address Auxiliary IPv4 or IPv6 addresses used by Network driver
--gateway IPv4 or IPv6 Gateway for the master subnet
--internal Restrict external access to the network
--ip-range Allocate container ip from a sub-range
--ipv6 Enable IPv6 networking
--subnet Subnet in CIDR format that represents a network segment
--opt, -o Set driver specific options, as follows:
com.docker.network.bridge.enable_ip_masquerade Enable IP masquerading
com.docker.network.bridge.enable_icc Enable or Disable Inter Container Connectivity
com.docker.network.bridge.host_binding_ipv4 Default IP when binding container ports
com.docker.network.driver.mtu Set the containers network MTU
```
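To make the mapping concrete, here is a hedged sketch of how that subset of bridge parameters could be assembled into a `docker network create` invocation. The flag names come from the docker CLI as listed above; the function itself is purely illustrative and not Rockstor code:

```python
# Illustrative sketch: build a `docker network create` command for a bridge
# network from the parameter subset listed above. Not Rockstor code; the
# function name and signature are assumptions for illustration only.
def build_network_create_cmd(name, subnet=None, gateway=None, ip_range=None,
                             aux_addresses=None, internal=False, ipv6=False,
                             driver_opts=None):
    """Return the argv list for creating a bridge network."""
    cmd = ["docker", "network", "create", "--driver", "bridge"]
    if subnet:
        cmd += ["--subnet", subnet]
    if gateway:
        cmd += ["--gateway", gateway]
    if ip_range:
        cmd += ["--ip-range", ip_range]
    for aux in (aux_addresses or []):
        cmd += ["--aux-address", aux]
    if internal:
        cmd.append("--internal")
    if ipv6:
        cmd.append("--ipv6")
    # Driver-specific options, e.g. com.docker.network.driver.mtu.
    for key, value in (driver_opts or {}).items():
        cmd += ["-o", "{}={}".format(key, value)]
    cmd.append(name)
    return cmd
```

The returned list could then be handed to e.g. `subprocess.run`, which is roughly how the "Manual" UI fields would translate into the underlying command.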
As also discussed in #1982, we could allow users to select one or more existing containers to add to the network at this step, although this may be the subject of a separate issue + PR.
@Luke-Nukem and @phillxnet, as you both participated in this prior discussion, thanks in advance for your feedback on any of the parameters above. I don't think enabling anything IPv6-related would be useful at the moment, for instance, as I believe Rockstor's UI does not support IPv6 yet.
As mentioned above, I'm currently working on this issue and hopefully should be done quite soon. Some elements in this work and the one described in #2003 depend on a pending PR (#1999), however, so I'll continue working on and refining them until then.
For this reason, I am particularly looking forward to hearing more about the following:
Was there something in particular that you would like to see, or not see? As you can probably guess from the above, I'm interested in getting as much feedback and as many suggestions for improvement as possible, so your experience seems very helpful to me, pending you'd be willing to share the gist of it.
Welcome again to the community!