Hi there,
I am trying to deploy Ray Serve for my machine learning models (TensorFlow + PyTorch) using Docker.
I have three questions:
- First of all, is there a Docker image specifically for Ray Serve, or is it OK to just use a Ray image and add my server code on top of it? (See the Dockerfile sketch after this list for what I have in mind.)
- The documentation says that after running `./build-docker.sh` I should get the following images: `ray-project/deploy`, `ray-project/examples`, and `ray-project/base-deps`. But I don't have these images (see below); instead I have other images prefixed `rayproject/`, and there is no `deploy` image. Why? As a consequence, the following test command does not work: `python -m pytest -v test/mini_test.py # This tests some basic functionality.`
- When I run `ray.init()` from within the container, it tells me that there is not enough shared memory available (see the warning below). I gave the container 1 GB just as a test, but I was surprised that Ray suggests using about 40 GB instead! Is it mandatory to have 40 GB of shared memory available just to get correct results / inference performance? (Below the warning I also show what I think the suggested `docker run` command would look like.)
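For context on the first question, here is a minimal Dockerfile sketch of what I mean by "adding code for the server": it starts from the official `rayproject/ray` image and layers my own code on top. The file `serve_app.py` and the installed packages are placeholders for my actual serving script and dependencies, not anything from the Ray docs:

```dockerfile
# Minimal sketch: extend the official Ray image with my own Ray Serve code.
FROM rayproject/ray:latest

# Install my model frameworks (placeholder packages/versions).
RUN pip install --no-cache-dir tensorflow torch

# serve_app.py is a placeholder for my script that calls ray.init()
# and starts my Serve deployments.
COPY serve_app.py /app/serve_app.py
WORKDIR /app

CMD ["python", "serve_app.py"]
```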
Images created:
- `rayproject/ray`
- `rayproject/ray-deps`
- `rayproject/base-deps`
Output of `ray.init()`:
```
WARNING services.py:1656 -- WARNING: The object store is using /tmp instead of /dev/shm because /dev/shm has only 67108864 bytes available. This will harm performance! You may be able to free up space by deleting files in /dev/shm. If you are inside a Docker container, you can increase /dev/shm size by passing '--shm-size=41.47gb' to 'docker run' (or add it to the run_options list in a Ray cluster config). Make sure to set this to more than 30% of available RAM.
```
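For reference, this is roughly how I start the container now versus what I understand the warning is asking for (image name and command are placeholders for my own setup):

```bash
# What I do now: only 1 GB of shared memory (placeholder image/command).
docker run --shm-size=1gb rayproject/ray:latest python serve_app.py

# What the warning suggests: closer to 30% of available RAM.
docker run --shm-size=41.47gb rayproject/ray:latest python serve_app.py
```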
Thank you very much for any help!
Edit: I now have a fourth question. When I build the image myself, I cannot use the GPUs. Apparently the docs say there is an image `rayproject/ray-ml` for that, so I will try it out. But then, why recommend building the image ourselves if there is a ready-to-use image on Docker Hub?
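In case it matters, this is how I plan to try the ML image with GPU access (a sketch, assuming the NVIDIA Container Toolkit is installed on the host; the command is just a quick check, not my real workload):

```bash
# Run the GPU-enabled Ray ML image with all GPUs visible to the container
# and print the resources Ray detects.
docker run --gpus all --shm-size=2gb rayproject/ray-ml:latest \
    python -c "import ray; ray.init(); print(ray.cluster_resources())"
```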