Hi,
I have 3 Docker containers, each containing a deep learning model (TensorFlow).
In each container I have a batch inference job over, let's say, 1000 images.
I have created a batch inference actor, and it parallelises the batches over multiple CPUs.
Which of the following scenarios would be best suited for this task?
1. Each Docker container runs its own separate Ray cluster (started from the Python code inside that container).
2. A Ray cluster is configured locally on the host, and each Docker container connects to it.
3. The Ray cluster runs in its own separate Docker container, and each of the three model containers connects to it.
4. One of the three containers also runs Ray (as the head node), and all three containers connect to it.
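If the containers connect to a shared Ray head node, each container's Python code would attach to the existing cluster via Ray Client instead of starting its own. A minimal sketch, assuming the head container is reachable under the hostname `ray-head` on a shared Docker network (the hostname is hypothetical; 10001 is Ray Client's default port):

```python
import ray

# Attach to an already-running Ray cluster over Ray Client instead of
# starting a local one. "ray-head" is a hypothetical Docker DNS name for
# the container that ran `ray start --head`.
ray.init(address="ray://ray-head:10001")
```

With this setup the model containers only need the Ray client library plus their own model code; the scheduling happens on the shared cluster.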
It is recommended to have the same dependencies on all head and worker nodes. Alternatively, you can use a runtime environment to sync dependencies: Environment Dependencies — Ray 3.0.0.dev0
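As a minimal sketch of the runtime-environment approach, dependencies can be declared when initialising Ray so they are shipped to the workers (the package pin here is hypothetical, not taken from the question):

```python
import ray

# Declare per-job pip dependencies instead of baking identical images
# for every node; Ray installs them in an isolated environment on the
# workers. The pinned version is only an example.
ray.init(runtime_env={"pip": ["tensorflow==2.13.0"]})
```
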