Ray 2.0.0 in Docker Compose

I’ve been trying to run Ray 2.0.0 in Docker Compose on macOS.
Here is my docker-compose.yaml configuration file:

version: "3"

services:

  ray-head:
    image: ${RAY_IMAGE}
    platform: linux/amd64

    ports:
      - "${REDISPORT}:${REDISPORT}"
      - "${DASHBOARDPORT}:${DASHBOARDPORT}"
      - "${HEADNODEPORT}:${HEADNODEPORT}"
      - 22346:22346
      - 22345:22345
    env_file:
      - .env
    command: >
      ray start -v --head 
      --temp-dir=${RAY_LOGS}
      --port=${REDISPORT} 
      --redis-shard-ports=6380,6381 
      --object-manager-port=22345 
      --node-manager-port=22346 
      --dashboard-host=0.0.0.0
      --dashboard-port=${DASHBOARDPORT}
      --num-cpus=4
      --num-gpus=0
      --block
      --storage=${RAY_LOGS}

    volumes:
      # Mount the input logs
      - ${RAY_LOGS}:${RAY_LOGS}

and the .env file:

HOST=localhost
RAY_IMAGE=rayproject/ray:2.0.0-py39-cu116
REDISPORT=7734
DASHBOARDPORT=8265
HEADNODEPORT=10001
REDISPASSWORD=your-password
NUM_WORKERS=2
NUM_CPU_WORKER=1

RAY_LOGS=${HOME}/ray
OUTPUT_LOGS=${HOME}/log
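
For context on how I intend to use the cluster: HEADNODEPORT=10001 matches Ray's default Ray Client server port, and once the head node is healthy a driver on the host would connect roughly like this (a hypothetical sketch — `connect_and_report` is my own helper name, and it requires `pip install "ray[client]"` on the host):

```python
import os

# HEADNODEPORT=10001 in the .env file matches Ray's default Ray Client
# server port published by the compose file.
RAY_ADDRESS = f"ray://localhost:{os.environ.get('HEADNODEPORT', '10001')}"

def connect_and_report():
    """Connect to the head node and return its resource summary.

    Hypothetical helper: needs a healthy head node, which is exactly
    what fails to come up below.
    """
    import ray  # imported lazily so this module loads without Ray installed

    ray.init(address=RAY_ADDRESS)
    return ray.cluster_resources()
```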

When I run docker compose up, I get the following traceback:

ray-head_1  | OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k
orq_debugger_1 exited with code 0
ray-head_1  | 2022-09-29 10:19:38,609	INFO usage_lib.py:478 -- Usage stats collection is enabled by default without user confirmation because this terminal is detected to be non-interactive. To disable this, add `--disable-usage-stats` to the command that starts the cluster, or run the following command: `ray disable-usage-stats` before starting the cluster. See https://docs.ray.io/en/master/cluster/usage-stats.html for more details.
ray-head_1  | 2022-09-29 10:19:38,609	INFO scripts.py:719 -- Local node IP: 172.30.0.3
ray-head_1  | Traceback (most recent call last):
ray-head_1  |   File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/_private/node.py", line 312, in __init__
ray-head_1  |     ray._private.services.wait_for_node(
ray-head_1  |   File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/_private/services.py", line 385, in wait_for_node
ray-head_1  |     raise TimeoutError("Timed out while waiting for node to startup.")
ray-head_1  | TimeoutError: Timed out while waiting for node to startup.
ray-head_1  | 
ray-head_1  | During handling of the above exception, another exception occurred:
ray-head_1  | 
ray-head_1  | Traceback (most recent call last):
ray-head_1  |   File "/home/ray/anaconda3/bin/ray", line 8, in <module>
ray-head_1  |     sys.exit(main())
ray-head_1  |   File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/scripts/scripts.py", line 2588, in main
ray-head_1  |     return cli()
ray-head_1  |   File "/home/ray/anaconda3/lib/python3.9/site-packages/click/core.py", line 1128, in __call__
ray-head_1  |     return self.main(*args, **kwargs)
ray-head_1  |   File "/home/ray/anaconda3/lib/python3.9/site-packages/click/core.py", line 1053, in main
ray-head_1  |     rv = self.invoke(ctx)
ray-head_1  |   File "/home/ray/anaconda3/lib/python3.9/site-packages/click/core.py", line 1659, in invoke
ray-head_1  |     return _process_result(sub_ctx.command.invoke(sub_ctx))
ray-head_1  |   File "/home/ray/anaconda3/lib/python3.9/site-packages/click/core.py", line 1395, in invoke
ray-head_1  |     return ctx.invoke(self.callback, **ctx.params)
ray-head_1  |   File "/home/ray/anaconda3/lib/python3.9/site-packages/click/core.py", line 754, in invoke
ray-head_1  |     return __callback(*args, **kwargs)
ray-head_1  |   File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/autoscaler/_private/cli_logger.py", line 852, in wrapper
ray-head_1  |     return f(*args, **kwargs)
ray-head_1  |   File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/scripts/scripts.py", line 746, in start
ray-head_1  |     node = ray._private.node.Node(
ray-head_1  |   File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/_private/node.py", line 319, in __init__
ray-head_1  |     raise Exception(
ray-head_1  | Exception: The current node has not been updated within 30 seconds, this could happen because of some of the Ray processes failed to startup.

Does anybody know what might be causing this issue?

Hey @Kordi1818 - sorry for the late reply.

Do you have access to the Ray logs, in particular files such as gcs_server.out or raylet.out under your RAY_LOGS=${HOME}/ray?
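
If it helps, here is a minimal, hypothetical sketch for grabbing the tail of those files once you locate them. With `--temp-dir=${RAY_LOGS}`, Ray should place the logs under `${RAY_LOGS}/session_latest/logs` inside the container (LOG_DIR below is an assumption based on Ray's default session layout — adjust it to your mount):

```python
from pathlib import Path

# Assumed location given --temp-dir=${RAY_LOGS}; adjust to your setup.
LOG_DIR = Path.home() / "ray" / "session_latest" / "logs"

def tail(path: Path, n: int = 20) -> list[str]:
    """Return the last n lines of a log file such as gcs_server.out."""
    return path.read_text().splitlines()[-n:]

def dump_startup_logs(log_dir: Path = LOG_DIR) -> None:
    """Print the tail of the two files most relevant to startup failures."""
    for name in ("gcs_server.out", "raylet.out"):
        f = log_dir / name
        if f.exists():
            print(f"--- {name} ---")
            print("\n".join(tail(f)))
        else:
            print(f"--- {name} not found in {log_dir} ---")
```

You could run this inside the container (e.g. via `docker compose exec ray-head python ...`) or on the host against the mounted ${RAY_LOGS} directory.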