HandBrake running in Docker Container

I came across a really good real-world application for this Rockstor server: the ability to host my video production files on a hardware RAID while also running some automated tasks, like creating the proxy/intermediate files I use for editing. Much of this is already solved by HandBrake, an open-source video transcoder available for Linux. I bet this solution could make Rockstor very popular with small to medium-size multimedia content creators, like all the YouTubers out there around the globe.

I know that there is a Docker implementation of the Plex Rock-on, but it would be a good idea and solution if a front panel could be created to operate with a hot source folder and hot output folder, a codec output definition, as well as some other basic parameters like file renaming and watermarking.

Is there anyone willing to work with me on this approach? I can provide a testing scenario, contribute to the process, and share know-how on the video post-production workflow.

This solution could easily compete with Adobe Media Encoder, a slow, expensive app that won’t come out of the Adobe Creative Cloud subscription, as you can’t buy it on its own.

Hi there @roberto0610,

I don’t really have experience with video editing workflows, but as you mention Handbrake and another rock-on, would the following docker container (by the reputable jlesage) fit your need(s)? I only read the brief description of this image but it seems to include the “watched folder(s)” scenario you listed:

This is a Docker container for HandBrake.

The GUI of the application is accessed through a modern web browser (no installation or configuration needed on client side) or via any VNC client.

A fully automated mode is also available: drop files into a watch folder and let HandBrake process them without any user interaction.


If yes, there’s probably a way to make a rock-on for that.


There is also a fork of the above docker container jlesage/handbrake that supports encoding via Nvidia (instead of Intel’s QuickSync): djaydev/handbrake


That’s a nice image, thanks for sharing!
I do notice it requires the nvidia runtime (unsurprisingly), so I’ll link a recent relevant post from @criddle who was interested in using the nvidia runtime for plex (for instance) in case that helps:

@Flox, @Hooverdan, @criddle, thank you so much for the HB recommendation. Getting docker installed was really easy, typing:
# yum install docker -y
After that I installed “jlesage/handbrake” as you recommended (after doing some exhaustive reading of many blogs on Google, plus videos on YouTube; I had never worked with Docker before, though, I just knew about it).

Installing the container:
# docker run jlesage/handbrake
Then I ran this command with the real directories on my server:

docker run -d \
--name=handbrake \
-p 5800:5800 \
-v /mnt2/8x240GB_Temper/8x240_ssd/HB_Proxy:/config:rw \
-v $HOME:/mnt2/8x240GB_Temper/8x240_ssd/:ro \
-v $HOME/mnt2/8x240GB_Temper/8x240_ssd/HB_Proxy/HB_Proxy_IN:/watch:rw \
-v $HOME/mnt2/8x240GB_Temper/8x240_ssd/HB_Proxy/HB_Proxy_OUT:/output:rw \
jlesage/handbrake

But I’m getting permission issues now.
I know how to use Linux fairly well, so I think my issue is a simple one.
While I wait for someone to help me out, let me share these 3 screenshots:
1. Showing HandBrake running in docker on my Rockstor box.
2. Showing the permission error message.
3. A screenshot of the activity window (I can see that it is searching for an nVidia GPU).

Side note: I’m also installing an nVidia GTX 1070 GPU.

Can someone give me advice on how to map my directory paths?
I want to use the same paths as the shared folders created in Rockstor, so I can use my Win10 computers to send/pull files to be transcoded directly into the hot folders.

I’m struggling with some user permissions.
HandBrake is running, and the nVidia GPU is running too, but I can’t access my shared Samba folders from within the same Rockstor box.

Command line instruction:
docker run -d --name=handbrake -p 5800:5800 -v /mnt2/8x240GB_Temper/8x240_ssd/HB_Proxy:/config:rw -v $HOME:/mnt2/8x240GB_Temper/8x240_ssd/:ro -v $HOME/8x240_ssd/HB_Proxy/HB_Proxy_IN:/watch:rw -v $HOME/8x240_ssd/HB_Proxy/HB_Proxy_OUT:/output:rw jlesage/handbrake

This instruction was taken as an example from this post. Thank you @Flox

Hi @roberto0610,

Sorry I didn’t get back to you earlier; there are actually a few different things I wanted to mention in my answer, and I needed to make myself a little more familiar with this image before posting. I’m still not 100% sure of its inner workings, but we should be able to get you closer to your goal(s) at least.

This step was unnecessary, as docker is already installed on Rockstor; it is indeed what Rock-ons use to function (see Rockstor’s documentation for more information). To the best of my understanding, the yum command you list shouldn’t actually have installed anything.

Handbrake does seem like an interesting target for a Rock-on, so it would probably be preferred to make one for it (feel free to create a corresponding issue on the rockon-registry if you’d like). In the meantime, we can try to get you up and running the “bare” docker run way.

Before going any further, however, @Hooverdan made a very good point below:

Indeed, it appears your CPU does not support QuickSync, and given you have an Nvidia card, running the image mentioned by @Hooverdan would be interesting. Please note, however, that the latter has some relatively involved requirements–namely the nvidia runtime–so the performance gain may or may not be worth it. As a result, I would lean towards using the jlesage image first at least.

Now, let’s actually look at how to run it. Please keep in mind that I’ll try to write a rock-on for it once we have a better grasp of the validity of each option. I do see a few oddities in the paths you wrote and given your requirement below…

… I would use the following:

  1. Create all the shares you need from Rockstor’s webUI. You will thus have all permission controls, snapshots, and samba/nfs exports easily accessible via the UI. I would create one share for each of the following (you can of course give them any name you prefer):
  • config
  • storage
  • watch
  • output
  2. (optional) Use Rockstor’s webUI to change the owner of these shares to the user of your choice (better than the default root): use the “Access Control” tab on the detailed page of a share.

  3. If you followed step 2, get the UID / GID of the user who is now the owner of these shares: go to System > Users to see this information. Alternatively, you can simply use the command line id <user_name>.

  4. We can now run the docker run command. Note that you need to use the names you chose in step 1 for these. In my case, it would thus be:

docker run -d \
--name=handbrake \
-p 5800:5800 \
-v /mnt2/config:/config:rw \
-v /mnt2/storage:/storage:ro \
-v /mnt2/watch:/watch:rw \
-v /mnt2/output:/output:rw \
-e USER_ID=<UID from step 3> \
-e GROUP_ID=<GID from step 3> \
jlesage/handbrake

Note the addition of the USER_ID and GROUP_ID environment variables. As mentioned on the image’s documentation, these default to 1000, which may explain the permission error you were having.
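As a quick sanity check, the numeric IDs to plug into those variables can be read straight from the shell. This is only a sketch: “root” and /mnt2/config below are placeholders, substitute the user and share you actually chose in steps 1 and 2.

```shell
# Look up the numeric UID/GID of the share owner chosen in step 2
# ("root" is only a stand-in here; substitute your own user name).
uid=$(id -u root)
gid=$(id -g root)
echo "USER_ID=${uid} GROUP_ID=${gid}"

# To confirm a share on disk is really owned by that user
# (replace /mnt2/config with one of your own shares):
#   stat -c '%u:%g' /mnt2/config
```

If the numbers printed by `stat` don’t match USER_ID/GROUP_ID, the container’s user won’t have write access, which matches the kind of permission error seen earlier.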

As long as your watch folder is exported via SAMBA, you should be fine.

I hope this helps, and let us know how it goes!


I got the HandBrake Docker running now.

docker run -d \
--name=handbrake \
-p 5800:5800 \
-v /mnt2/Prores_Proxy:/config:rw \
-v /mnt2/Prores_Proxy:/storage:ro \
-v /mnt2/Prores_Proxy/Proxy_In:/watch:rw \
-v /mnt2/Prores_Proxy/Proxy_Out:/output:rw \
-e USER_ID=1000 \
-e GROUP_ID=1000 \
jlesage/handbrake

I was able to convert video at a good conversion rate. I reviewed the videos and they are perfect, no glitches or artifacts, really good. I like the way it performs. By the way, there is no ProRes codec preset; I’m investigating how to have HandBrake transcode to ProRes. Any help is much appreciated. Thank you all.

My only concern is related to multi-thread use. I wonder if the resources for this Docker container could be increased?

Now I also need help installing the nVidia GTX 1070, and with how to execute the docker command in order to use the GPU and multicore performance to get my encoding a bit faster. My current server is an old Dell PowerEdge R910 with 4x 10-core CPUs and 512GB of RAM, 8x 240GB SSDs as a RAID0 drive, and 1x dual-channel 40Gb Infiniband network card in full operation.
I’m trying this command and I get this error while executing.

nVidia Hardware test.
# lspci | grep -i nvidia
04:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070] (rev a1)
04:00.1 Audio device: NVIDIA Corporation GP104 High Definition Audio Controller (rev a1)

Docker with GPU support command (testing version, still not running). Maybe a missing driver???

docker run -d \
--name=handbrake \
-p 5800:5800 \
--runtime=nvidia \
-v /mnt2/Prores_Proxy:/config:rw \
-v /mnt2/Prores_Proxy:/storage:ro \
-v /mnt2/Prores_Proxy/Proxy_In:/watch:rw \
-v /mnt2/Prores_Proxy/Proxy_Out:/output:rw \
-e USER_ID=1000 \
-e GROUP_ID=1000 \
jlesage/handbrake

But this command is kicking me out with this message:
docker: Error response from daemon: Unknown runtime specified nvidia.
See 'docker run --help'.

I have no idea where to go from here.

Happy to read you got it working! Let’s try to see if we can get closer to what you want.

By default, docker uses pretty much all the resources from the host that it can. See the excerpt from docker’s documentation below:

By default, a container has no resource constraints and can use as much of a given resource as the host’s kernel scheduler will allow.

Your handbrake container should thus use all it can for video encoding/decoding. Have you kept an eye on the CPU usage while you were converting videos? A simple top / htop should be enough for you to see a big increase in CPU usage, but if you want more information, you can have a look at the Netdata rock-on: it gives you a lot of information about your system usage (borderline too much, but can prove quite useful at times).
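For completeness, the reverse is also possible: if you ever want to cap rather than maximize usage, `docker run` accepts explicit limits, and `docker stats` shows live per-container usage. The flags below are standard docker options (not something this thread tested on the handbrake container), so treat this as a sketch:

```shell
# Live CPU/memory usage of the running container:
#   docker stats handbrake
#
# To cap the container instead of letting it take everything, add
# limits to the docker run line, e.g. 32 of the R910's 40 cores
# and 64 GB of RAM:
#   docker run -d --name=handbrake --cpus=32 --memory=64g ... jlesage/handbrake
#
# How many cores the host kernel exposes (i.e. what "all it can" means):
nproc
```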

That one is a little more complicated, but luckily you’ll find good pointers to at least get you moving forward. The difficulty resides in the fact that, in order to use your GPU, the docker container needs to be run with a runtime different from the default one; it is this process that could prove relatively involved. Have a look at the NVIDIA runtime’s brief introduction below:

The NVIDIA Container Toolkit allows users to build and run GPU accelerated Docker containers. The toolkit includes a container runtime library and utilities to automatically configure containers to leverage NVIDIA GPUs. Full documentation and frequently asked questions are available on the repository wiki.

To install it, I would refer you to the link I posted above (re-pasted below), in which I detailed my notes on part of the process (as I do not have an NVIDIA GPU, I couldn’t test the procedure all the way):

These notes were written after @criddle looked into taking advantage of an NVIDIA GPU for similar tasks, so maybe he has some further feedback on the process.
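For orientation only: the “Unknown runtime specified nvidia” error goes away once the runtime is installed and registered with the docker daemon. The steps below are the standard NVIDIA Container Toolkit setup, not something verified on Rockstor, and the exact package names and paths may differ per distro:

```shell
# 1. Install the NVIDIA driver and the nvidia-container-toolkit package
#    (exact package/repository depends on the distro; see NVIDIA's docs).
# 2. Register the runtime with the docker daemon:
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
# 3. Restart the daemon so it picks up the new runtime:
sudo systemctl restart docker
# 4. Verify that "nvidia" now appears among the registered runtimes:
docker info --format '{{.Runtimes}}'
```

Only after this does `--runtime=nvidia` become a valid option for `docker run`.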

As a reminder, this will only work if you replace the jlesage/handbrake image with the one linked by @Hooverdan above: djaydev/handbrake.

Hope this helps get you moving forward…


Just got this moving and posted an update on my original thread.
Maybe it can point you in the right direction. That said, it’s ugly and has some caveats, so if your solution is currently working for you then I would probably stick with it.

If you want to tinker and possibly dig a hole you’ll one day have to figure a way out of (like me :rofl:), then by all means proceed:

Good luck!