Autoscaling in minikube does not work?

Hello, I have a minikube environment which I started with the resources below, as mentioned in the documentation:

minikube start --cpus=6 --memory="4G"

I try to launch 150 tasks on the Ray head node as shown below:

import time
import ray

@ray.remote
def f():
    time.sleep(1)

I then run this function from the Python prompt on the head node:

result = ray.get([f.remote() for _ in range(150)])
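As a side note, a quick way to confirm what the cluster actually reports is to print the total and available resources from the head node. This is just a minimal sketch; it assumes the driver attaches to the already-running cluster via ray.init(address="auto"):

import ray

# Attach to the cluster that is already running on the head node.
ray.init(address="auto")

# Total resources the cluster knows about vs. what is currently free.
print("cluster resources:  ", ray.cluster_resources())
print("available resources:", ray.available_resources())

In the setup above this would report only 1 CPU in total, which matches the 1.0/1.0 CPU line in the logs below, so every task beyond the first is queued as a pending demand.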

I was expecting the autoscaler to spawn two worker nodes to execute these tasks, but no workers are being spawned.

Below are the logs:

Usage:

1.0/1.0 CPU

0.00/0.271 GiB memory

0.00/0.135 GiB object_store_memory

Demands:

{'CPU': 1.0}: 148+ pending tasks/actors

2021-04-16 06:35:50,622 WARNING resource_demand_scheduler.py:711 -- The autoscaler could not find a node type to satisfy the request: [{'CPU': 1.0}, {'CPU': 1.0}, {'CPU': 1.0}, ... (one {'CPU': 1.0} entry per pending task)]. If this request is related to placement groups the resource request will resolve itself, otherwise please specify a node type with the necessary resource https://docs.ray.io/en/master/cluster/autoscaling.html#multiple-node-type-autoscaling.

2021-04-16 06:35:50,656 INFO autoscaler.py:325 --

======== Autoscaler status: 2021-04-16 06:35:50.656820 ========

Node status


Healthy:

1 head_node

Pending:

(no pending nodes)

Recent failures:

(no failures)

Resources


Usage:

1.0/1.0 CPU

0.00/0.271 GiB memory

0.00/0.135 GiB object_store_memory

Demands:

{'CPU': 1.0}: 143+ pending tasks/actors

Can you please comment on why worker nodes are not being spawned by the autoscaler in this setup when tasks are submitted from the head node?

thanks

Sorry, I missed the cluster launch command. I used the following:

ray up ray/python/ray/autoscaler/kubernetes/example-full.yaml

As mentioned in the documentation here:

https://docs.ray.io/en/master/cluster/kubernetes.html

cc @ijrsvt, can you take a look at this?

This looks like a bug in Ray 1.2.0 that was resolved in [autoscaler][kubernetes] autoscaling hotfix by DmitriGekhtman · Pull Request #14024 · ray-project/ray · GitHub (https://github.com/ray-project/ray/pull/14024).

I’d recommend using a more recent master version of Ray for Kubernetes features.
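Once you are on a newer version, one way to sanity-check that scaling actually kicks in is to ask the autoscaler for capacity explicitly and then watch the reported resources grow. This is only a sketch, and it assumes ray.autoscaler.sdk.request_resources is available in the version you upgrade to:

import time
import ray
from ray.autoscaler.sdk import request_resources

# Attach to the running cluster from the head node.
ray.init(address="auto")

# Ask the autoscaler to provision enough capacity for 4 concurrent CPU tasks.
request_resources(num_cpus=4)

# Poll until the extra CPUs show up (give up after ~5 minutes).
for _ in range(60):
    cpus = ray.cluster_resources().get("CPU", 0)
    print("CPUs visible to Ray:", cpus)
    if cpus >= 4:
        break
    time.sleep(5)

If the worker pods still do not appear after that, the Kubernetes events in the Ray namespace are usually the next place to look.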