Error Scaling Ray Serve to 2 Replicas

As for @Dmitri's method, here's the status right after I tried running 5 replicas:

➜ ray git:(releases/1.4.1) ✗ kubectl -n ray exec example-cluster-ray-head-j2bqx -- ray status

======== Autoscaler status: 2021-08-10 16:41:14.985306 ========
Node status
---------------------------------------------------------------
Healthy:
 1 head_node
Pending:
 (no pending nodes)
Recent failures:
 (no failures)

Resources
---------------------------------------------------------------
Usage:
 4.0/4.0 CPU
 0.0/1.0 CPU_group_0_1a05a614d31404430b219ade936e9e03
 0.0/1.0 CPU_group_0_68f029c34779e27ecdc0ab25c8c5c6ea
 0.0/1.0 CPU_group_0_a56f2612e58b97d6310a9db96c6aa948
 0.0/1.0 CPU_group_0_ea8b9e5878f0d39ea69a6cea41f1d919
 1.0/1.0 CPU_group_1a05a614d31404430b219ade936e9e03
 1.0/1.0 CPU_group_68f029c34779e27ecdc0ab25c8c5c6ea
 1.0/1.0 CPU_group_a56f2612e58b97d6310a9db96c6aa948
 1.0/1.0 CPU_group_ea8b9e5878f0d39ea69a6cea41f1d919
 0.00/1.400 GiB memory
 0.00/0.585 GiB object_store_memory

Demands:
 {'CPU_group_41c2a9cef5e5273468fe536c57e34ffa': 1.0}: 1+ pending tasks/actors
 {'CPU': 1.0} * 1 (PACK): 1+ pending placement groups
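For reference, the deployment I'm scaling looks roughly like the sketch below; the function name is illustrative and this isn't my exact code, but the replica count and the 1 CPU per replica are what I believe produce the CPU_group_* placement groups shown above:

```python
import ray
from ray import serve

# Connect to the running cluster from inside the head pod.
ray.init(address="auto")

# Long-lived Serve instance that outlives this script.
serve.start(detached=True)

# Five replicas, each reserving one CPU. On a 4-CPU head node, four
# replicas get placed and the fifth stays pending, which would match
# the four filled CPU_group_* entries and the one pending placement
# group in the status output above.
@serve.deployment(num_replicas=5, ray_actor_options={"num_cpus": 1})
def hello(request):
    return "hello"

hello.deploy()
```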

Although I did set the cluster up with the Ray Helm chart, running "kubectl -n ray logs example-cluster-ray-head-j2bqx" showed no output at all. Did I get that command right?
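In case the command is fine and the logs simply live elsewhere, I also tried reading the autoscaler's monitor log from inside the head pod; the path below is my guess based on Ray's default session log directory:

kubectl -n ray exec example-cluster-ray-head-j2bqx -- cat /tmp/ray/session_latest/logs/monitor.log

Or, since the Helm chart also runs the Ray operator, should I be looking at the operator pod's logs instead?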