I made this tutorial because I had a lot of problems with dependency versions, Python versions, and CUDA/GPU support for TensorFlow 2 on Linux, before I found that Docker could be the solution.
As a result of my struggles I built my own docker image with the dependencies inside:
-TensorFlow 2 with GPU support
How do you use it? In short, your system downloads the docker image, creates a container, runs your code in it, and gives you the results.
If you have no experience with docker, you can think about a docker container as:
-some kind of virtual machine with all the useful dependencies inside (in fact docker isn't a VM, but the analogy is close enough to start with) or
-an application with input arguments and an output directory for results or
-an environment for running Python scripts
In the first phase it's not important how to understand it, but how to use it.
If you want to use docker, you have to install it.
For Ubuntu, type in a console:
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
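Note: on a stock Ubuntu system the docker-ce packages are not in the default repositories, so the install command above may fail until you add Docker's own apt repository. A sketch of that setup, based on the official Docker installation docs (the exact steps may change over time, so check the docs if something fails):

```shell
# Add Docker's official GPG key and apt repository;
# without this, apt cannot find the docker-ce package on stock Ubuntu.
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Register the repository for this Ubuntu release, then refresh the package index.
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
  https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
```

After this, the `sudo apt-get install -y docker-ce docker-ce-cli containerd.io` command above should work.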
For Windows, download and run the installer from: Install Docker Desktop on Windows | Docker Documentation
To download the image, type in a console:
docker pull peterpirogtf/ray_tf2
You can of course also use the official builds, or build your own image:
docker pull rayproject/ray:latest-gpu
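After pulling, you can confirm that the image is available locally:

```shell
# List locally stored images; the pulled image should appear here
# with its repository name, tag, image ID, and size.
docker images
```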
Run the docker image for a simple RLlib example:
docker run -it peterpirogtf/ray_tf2 rllib train --run=PPO --env=CartPole-v0
The -it option lets you communicate with the docker container through the console (input and output).
Options are described here: https://docs.docker.com/engine/reference/commandline/run/
If everything is correct, you will see something like this:
In this configuration your docker container can't access directories and files outside the container, or the network.
I hope to write about useful docker run options later.
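As a sketch of what such options look like: the -v flag mounts a host directory into the container so results survive after it exits, and --gpus exposes the host's GPUs (this requires the NVIDIA Container Toolkit on the host). The paths below are illustrative examples, not fixed names:

```shell
# Run the same RLlib example, but with host access:
#   --gpus all                      - give the container access to all host GPUs
#   -v /home/user/results:/results  - map an example host directory to /results inside the container
docker run -it \
  --gpus all \
  -v /home/user/results:/results \
  peterpirogtf/ray_tf2 \
  rllib train --run=PPO --env=CartPole-v0
```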
My docker image is rather big, but I used the tensorflow2-gpu container as the base to avoid GPU problems.
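To check that TensorFlow inside the container actually sees the GPU (assuming the NVIDIA driver and Container Toolkit are set up on the host), you can run a quick one-liner:

```shell
# Start a throwaway container (--rm) that prints the GPUs TensorFlow can see;
# an empty list [] means the container has no GPU access.
docker run --rm --gpus all peterpirogtf/ray_tf2 \
  python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```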