Ray workers can't ssh to head node

I was trying earlier to set up a cluster manually for Horovod, but it wouldn't work properly, so I am now trying to set it up with the cluster launcher on GCP. I created an IAM service account with owner privileges and added its key file path to my environment (a sketch of how I did that is after the YAML). I am using this YAML:

# An unique identifier for the head node and workers of this cluster.
cluster_name: default

# The maximum number of workers nodes to launch in addition to the head
# node.
max_workers: 2

# The autoscaler will scale up the cluster faster with higher upscaling speed.
# E.g., if the task requires adding more nodes then autoscaler will gradually
# scale up the cluster in chunks of upscaling_speed*currently_running_nodes.
# This number should be > 0.
# upscaling_speed: 1.0

# This executes all commands on all nodes in the docker container,
# and opens all the necessary ports to support the Ray cluster.
# Empty string means disabled.
docker:
  # image: "rayproject/ray-ml:latest-cpu" # You can change this to latest-cpu if you don't need GPU support and want a faster startup
    # image: rayproject/ray:latest-gpu   # use this one if you don't need ML dependencies, it's faster to pull
  # container_name: "ray_container"
  # If true, pulls latest version of image. Otherwise, `docker run` will only pull the image
  # if no cached version is present.
  pull_before_run: True
  run_options:  # Extra options to pass into "docker run"
    - --ulimit nofile=65536:65536

  # Example of running a GPU head with CPU workers
  head_image: "rayproject/ray:latest-cpu"
  container_name: "ray_head"
  # Allow Ray to automatically detect GPUs

  worker_image: "rayproject/ray-ml:latest-cpu"
  container_name: "ray_worker"
  # worker_run_options: []

# If a node is idle for this many minutes, it will be removed.

# Cloud-provider specific configuration.
provider:
    type: gcp
    region: us-west4
    availability_zone: us-west4-a
    project_id: learning-orchestra-347022 # Globally unique project id

# How Ray will authenticate with newly launched nodes.
auth:
    ssh_user: ubuntu
# By default Ray creates a new private keypair, but you can also use your own.
# If you do so, make sure to also set "KeyName" in the head and worker node
# configurations below. This requires that you have added the key into the
# project wide meta-data.
#    ssh_private_key: /path/to/your/key.pem

# Tell the autoscaler the allowed node types and the resources they provide.
# The key is the name of the node type, which is just for debugging purposes.
# The node config specifies the launch config and physical instance type.
available_node_types:
    ray_head_default:
        # The minimum number of worker nodes of this type to launch.
        # This number should be >= 0.
        min_workers: 0
        # The maximum number of worker nodes of this type to launch.
        # This takes precedence over min_workers.
        max_workers: 0
        # The resources provided by this node type.
        resources: {"CPU": 0}
        # Provider-specific config for the head node, e.g. instance type. By default
        # Ray will auto-configure unspecified fields such as subnets and ssh-keys.
        # For more documentation on available fields, see:
        # https://cloud.google.com/compute/docs/reference/rest/v1/instances/insert
        node_config:
            machineType: n1-standard-2
            disks:
              - boot: true
                autoDelete: true
                type: PERSISTENT
                initializeParams:
                  diskSizeGb: 50
                  # See https://cloud.google.com/compute/docs/images for more images
                  sourceImage: projects/deeplearning-platform-release/global/images/family/common-cpu

            # Additional options can be found in the compute docs at
            # https://cloud.google.com/compute/docs/reference/rest/v1/instances/insert

            # If the network interface is specified as below in both head and worker
            # nodes, the manual network config is used.  Otherwise an existing subnet is
            # used.  To use a shared subnet, ask the subnet owner to grant permission
            # for 'compute.subnetworks.use' to the ray autoscaler account...
            # networkInterfaces:
            #   - kind: compute#networkInterface
            #     subnetwork: path/to/subnet
            #     aliasIpRanges: []
    ray_worker_small:
        # The minimum number of worker nodes of this type to launch.
        # This number should be >= 0.
        min_workers: 2
        # The maximum number of worker nodes of this type to launch.
        # This takes precedence over min_workers.
        max_workers: 2
        # The resources provided by this node type.
        resources: {"CPU": 1}
        # Provider-specific config for the head node, e.g. instance type. By default
        # Ray will auto-configure unspecified fields such as subnets and ssh-keys.
        # For more documentation on available fields, see:
        # https://cloud.google.com/compute/docs/reference/rest/v1/instances/insert
        node_config:
            machineType: n1-standard-2
            disks:
              - boot: true
                autoDelete: true
                type: PERSISTENT
                initializeParams:
                  diskSizeGb: 50
                  # See https://cloud.google.com/compute/docs/images for more images
                  sourceImage: projects/deeplearning-platform-release/global/images/family/common-cpu
            # Run workers on preemptible instances by default.
            # Comment this out to use on-demand.
            # scheduling:
              # - preemptible: true
            # Un-Comment this to launch workers with the Service Account of the Head Node
            # serviceAccounts:
            # - email: ray-autoscaler-sa-v1@<project_id>.iam.gserviceaccount.com
            #   scopes:
            #   - https://www.googleapis.com/auth/cloud-platform

    # Additional options can be found in the compute docs at
    # https://cloud.google.com/compute/docs/reference/rest/v1/instances/insert

# Specify the node type of the head node (as configured above).
head_node_type: ray_head_default
worker_default_node_type: ray_worker_small

# Files or directories to copy to the head and worker nodes. The format is a
# dictionary from REMOTE_PATH: LOCAL_PATH, e.g.
file_mounts: {
#    "/path1/on/remote/machine": "/path1/on/local/machine",
#    "/path2/on/remote/machine": "/path2/on/local/machine",
}

# Files or directories to copy from the head node to the worker nodes. The format is a
# list of paths. The same path on the head node will be copied to the worker node.
# This behavior is a subset of the file_mounts behavior. In the vast majority of cases
# you should just use file_mounts. Only use this if you know what you're doing!
cluster_synced_files: []

# Whether changes to directories in file_mounts or cluster_synced_files in the head node
# should sync to the worker node continuously
file_mounts_sync_continuously: False

# Patterns for files to exclude when running rsync up or rsync down
rsync_exclude:
    - "**/.git"
    - "**/.git/**"

# Pattern files to use for filtering out files when running rsync up or rsync down. The file is searched for
# in the source directory and recursively through all subdirectories. For example, if .gitignore is provided
# as a value, the behavior will match git's behavior for finding and using .gitignore files.
rsync_filter:
    - ".gitignore"

# List of commands that will be run before `setup_commands`. If docker is
# enabled, these commands will run outside the container and before docker
# is setup.
initialization_commands: []

# List of shell commands to run to set up nodes.
setup_commands: []
    # Note: if you're developing Ray, you probably want to create a Docker image that
    # has your Ray repo pre-cloned. Then, you can replace the pip installs
    # below with a git checkout <your_sha> (and possibly a recompile).
    # To run the nightly version of ray (as opposed to the latest), either use a rayproject docker image
    # that has the "nightly" (e.g. "rayproject/ray-ml:nightly-gpu") or uncomment the following line:
    # - pip install -U "ray[default] @ https://s3-us-west-2.amazonaws.com/ray-wheels/latest/ray-2.0.0.dev0-cp37-cp37m-manylinux2014_x86_64.whl"


# Custom commands that will be run on the head node after common setup.
head_setup_commands:
  - pip install google-api-python-client==1.7.8
  - sudo apt-get update
  - sudo apt-get -y install build-essential
  - sudo apt-get -y install cmake gcc g++
  - HOROVOD_WITH_GLOO=1 HOROVOD_WITH_TENSORFLOW=1 pip install 'horovod[ray,tensorflow,keras]'

# Custom commands that will be run on worker nodes after common setup.
worker_setup_commands:
  - sudo apt-get update
  - sudo apt-get -y install build-essential
  - sudo apt-get -y install cmake gcc g++
  - HOROVOD_WITH_GLOO=1 HOROVOD_WITH_TENSORFLOW=1 pip install 'horovod[ray,tensorflow,keras]'

# Command to start ray on the head node. You don't need to change this.
head_start_ray_commands:
    - ray stop
    - >-
      ray start
      --head
      --port=6379
      --object-manager-port=8076
      --autoscaling-config=~/ray_bootstrap_config.yaml
# Command to start ray on worker nodes. You don't need to change this.
worker_start_ray_commands:
    - ray stop
    - >-
      ray start
      --address=$RAY_HEAD_IP:6379
      --object-manager-port=8076
head_node: {}
worker_nodes: {}
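
In case it matters, this is roughly how I pointed the launcher at the service-account key before running ray up (the key path below is a placeholder for wherever I saved the JSON key, and I'm assuming GOOGLE_APPLICATION_CREDENTIALS is the variable the GCP provider picks up):

# Standard env var for Google application-default credentials, which the
# Ray GCP node provider / google-api-python-client read.
export GOOGLE_APPLICATION_CREDENTIALS=$HOME/keys/ray-autoscaler-sa.json   # placeholder path

# Then bring the cluster up with the YAML above.
ray up cluster.yaml -y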

Am I missing anything?

What are your error messages?

The head node works properly, but the worker node fails with the following error:

==> /tmp/ray/session_latest/logs/monitor.out <==
2022-06-09 06:42:15,219 VINFO command_runner.py:552 -- Running docker run --rm --name ray_worker -d -it -e LC_ALL=C.UTF-8 -e LANG=C.UTF-8 --ulimit nofile=65536:65536 --shm-size='5261949665.280001b' --net=host rayproject/ray-ml:latest-cpu bash
2022-06-09 06:42:15,219 VVINFO command_runner.py:555 -- Full command is ssh -tt -i ~/ray_bootstrap_key.pem -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ExitOnForwardFailure=yes -o ServerAliveInterval=5 -o ServerAliveCountMax=3 -o ControlMaster=auto -o ControlPath=/tmp/ray_ssh_070dd72385/c21f969b5f/%C -o ControlPersist=10s -o ConnectTimeout=120s ubuntu@10.182.0.42 bash --login -c -i 'true && source ~/.bashrc && export OMP_NUM_THREADS=1 PYTHONWARNINGS=ignore && (docker run --rm --name ray_worker -d -it -e LC_ALL=C.UTF-8 -e LANG=C.UTF-8 --ulimit nofile=65536:65536 --shm-size='"'"'5261949665.280001b'"'"' --net=host rayproject/ray-ml:latest-cpu bash)'

==> /tmp/ray/session_latest/logs/monitor.log <==
2022-06-09 06:42:17,591 INFO discovery.py:873 -- URL being requested: GET https://compute.googleapis.com/compute/v1/projects/learning-orchestra-347022/zones/us-west4-a/operations/operation-1654782131775-5e103fd2527f2-c1f48172-da774689?alt=json
2022-06-09 06:42:17,883 INFO node.py:331 -- wait_for_compute_zone_operation: Operation operation-1654782131775-5e103fd2527f2-c1f48172-da774689 finished.
2022-06-09 06:42:17,893 INFO discovery.py:873 -- URL being requested: GET https://compute.googleapis.com/compute/v1/projects/learning-orchestra-347022/zones/us-west4-a/instances?filter=((status+%3D+RUNNING)+OR+(status+%3D+PROVISIONING)+OR+(status+%3D+STAGING))+AND+(labels.ray-cluster-name+%3D+default)&alt=json

==> /tmp/ray/session_latest/logs/monitor.out <==
2022-06-09 06:42:17,884 ERR updater.py:157 -- New status: update-failed
2022-06-09 06:42:17,889 ERR updater.py:159 -- !!!
2022-06-09 06:42:17,894 VERR updater.py:167 -- {'message': 'SSH command failed.'}
2022-06-09 06:42:17,895 ERR updater.py:169 -- SSH command failed.
2022-06-09 06:42:17,895 ERR updater.py:171 -- !!!

@Camazaqu we'd like to help with this one. But next time, if you can post it in the right category (this one belongs under Ray Clusters), we'll be able to get to it sooner.

@Alex could you help check this one?

I only noticed that after I posted; it won't happen next time.


Just found this post.
@cade should take over at this point.

One thing I notice is that some errors from worker startup appear to be getting swallowed.
There was a recent fix for this (flushing the command runner's stdout), but it did not make it into the most recent Ray release (1.13.0).
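
In the meantime, a quick way to narrow it down is to try by hand the same SSH hop the updater is attempting (the IP, user, and key path below are copied from your monitor.out output; run this from the head node, e.g. after ray attach cluster.yaml):

# Same key and user the autoscaler uses; a failure here usually points at a
# firewall rule, a wrong ssh_user, or the worker VM not being reachable yet.
ssh -tt -i ~/ray_bootstrap_key.pem \
    -o StrictHostKeyChecking=no -o IdentitiesOnly=yes \
    ubuntu@10.182.0.42 'echo ssh ok && docker ps'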

You might get clearer error output if you deploy the master version of Ray in the cluster, either by using a nightly Ray image or by running the relevant pip install command; see the sketch after the links below.
https://docs.ray.io/en/latest/ray-overview/installation.html#docker-source-images
https://docs.ray.io/en/latest/ray-overview/installation.html#daily-releases-nightlies
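
For example (the nightly image tag is an assumption on my part; double-check it against the links above), you could either switch the docker images in your YAML to a nightly build, or keep your current images and install the nightly wheel from the line already commented out in your setup_commands:

docker:
    # nightly ray-ml CPU images (tag assumed; see the docker source images link above)
    head_image: "rayproject/ray-ml:nightly-cpu"
    worker_image: "rayproject/ray-ml:nightly-cpu"

# or, keeping your current images, install the nightly wheel on top of them:
setup_commands:
    - pip install -U "ray[default] @ https://s3-us-west-2.amazonaws.com/ray-wheels/latest/ray-2.0.0.dev0-cp37-cp37m-manylinux2014_x86_64.whl"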
