How do I ask Ray to autoscale the resources for tuning?

I’m using Ray `tune.run` and would like to know if there is a way to autoscale the resources available for tuning instead of specifying them explicitly.
Thanks!

resources_per_trial (dict): Machine resources to allocate per trial,  
        e.g. `{"cpu": 64, "gpu": 8}`. Note that GPUs will not be  
        assigned unless you specify them here. Defaults to 1 CPU and 0  
        GPUs in `Trainable.default_resource_request()`.
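
Right now I’m setting it explicitly, something like this (a minimal sketch; `trainable` and the search space below are just placeholders):

```python
from ray import tune

def trainable(config):
    # placeholder objective, just to make the example self-contained
    tune.report(score=config["x"] ** 2)

tune.run(
    trainable,
    config={"x": tune.uniform(0.0, 1.0)},
    num_samples=10,
    resources_per_trial={"cpu": 10, "gpu": 0},  # the fixed allocation I'd like to avoid
)
```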

@kai
Any help is appreciated!
Thanks!

What do you mean by “autoscale the resources available” here?

It says that it uses only 1 CPU by default. How can I make it use as many CPUs as are available without explicitly saying
`resources_per_trial={"cpu": 10, "gpu": 0}`?

FYI, I’m using SLURM to submit jobs.

This is 1 CPU per function evaluation (trial). Tune uses all the CPUs available on your cluster to run function evaluations in parallel.

So for example, if you had a 500-CPU cluster, you’d see 500 parallel function evaluations (each taking 1 CPU).
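
A minimal sketch of what that looks like (assuming a Ray cluster is already running and `ray.init(address="auto")` connects to it; the objective is a placeholder):

```python
import ray
from ray import tune

def trainable(config):
    # placeholder objective for illustration
    tune.report(score=config["x"] ** 2)

ray.init(address="auto")  # connect to the already-running cluster

# With the default of 1 CPU per trial, Tune schedules as many
# concurrent trials as the cluster has free CPUs.
tune.run(
    trainable,
    config={"x": tune.uniform(0.0, 1.0)},
    num_samples=500,
)
```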

Thanks!

If I had 500 CPUs (requested through SLURM) and wanted to run only 10 trials, is there a way for Ray to divide the available CPUs among the trials?

What’s the number of CPUs available per machine? If it is 64, you can allocate at most 64 CPUs per trial, since a single trial’s resource request has to fit on one machine.

Keep in mind, more CPUs per trial doesn’t necessarily translate to faster execution.
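
If you want to avoid hard-coding the split, something like this should work (untested sketch; `ray.cluster_resources()` reports the total resources of the connected cluster, and `max_cpus_per_node` is a placeholder you’d set to your actual per-node count):

```python
import ray
from ray import tune

ray.init(address="auto")  # connect to the SLURM-launched cluster

num_trials = 10
max_cpus_per_node = 64  # assumption: replace with your per-node CPU count

# Split the cluster's CPUs evenly across the trials, capped at
# what a single machine can offer.
total_cpus = int(ray.cluster_resources()["CPU"])
cpus_per_trial = max(1, min(total_cpus // num_trials, max_cpus_per_node))

def trainable(config):
    # placeholder objective; your training code has to actually use
    # the reserved CPUs (e.g. via multi-threaded libraries) to benefit
    tune.report(score=config["x"] ** 2)

tune.run(
    trainable,
    config={"x": tune.uniform(0.0, 1.0)},
    num_samples=num_trials,
    resources_per_trial={"cpu": cpus_per_trial},
)
```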

I think there are 24 CPUs per node.
Thanks for the suggestion!