Spread trials evenly with fractional GPU resources

How severely does this issue affect your experience of using Ray?

  • Medium: It contributes to significant difficulty in completing my task, but I can work around it.

I’m using tune.run with resources_per_trial=dict(gpu=0.2) on a cluster with 2 GPUs. Unfortunately, Ray places the first five trials on the same GPU. I would like to spread the trials evenly across both GPUs. I’m not setting the fractional GPU requirement higher because I don’t know in advance how many trials I’ll launch. Is it possible to do this?
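
For reference, roughly what I’m doing (the trainable is a placeholder for my actual training function):

```python
from ray import tune

def trainable(config):
    ...  # placeholder for the actual training function

tune.run(
    trainable,
    num_samples=5,  # all 5 trials fit on one GPU, since 5 * 0.2 = 1.0
    resources_per_trial=dict(gpu=0.2),
)
```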

Is it possible to change the per-trial fractional GPU requirement so that you get the desired spread? (e.g., "GPU": 0.3 to spread 5 trials across 2 GPUs)
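
Concretely, something like the sketch below (trainable is a placeholder). Since only three trials at 0.3 fit on one GPU (3 * 0.3 = 0.9), the remaining two would land on the second GPU:

```python
from ray import tune

def trainable(config):
    ...  # placeholder for the actual training function

tune.run(
    trainable,
    num_samples=5,
    # 0.3 per trial: only 3 trials fit on one GPU (3 * 0.3 = 0.9),
    # so the remaining 2 are placed on the second GPU.
    resources_per_trial={"gpu": 0.3},
)
```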

Note that the GPU resource allocation is purely logical and is used to assign the correct GPU IDs to workers. You still need to limit GPU utilization in your user code.
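
For example, with PyTorch you could cap the per-process GPU memory inside the trainable; a sketch, assuming the gpu=0.2 request from above (adjust the fraction to whatever you actually request):

```python
import torch

def trainable(config):
    # Ray sets CUDA_VISIBLE_DEVICES for each trial, so device 0 here
    # refers to the GPU that Ray assigned to this worker.
    if torch.cuda.is_available():
        # Cap this process at ~20% of the GPU's memory to match the
        # logical gpu=0.2 request.
        torch.cuda.set_per_process_memory_fraction(0.2, device=0)
    ...  # training code
```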

That works if I know in advance how many trials I want to run. But usually I don’t know that before starting the first trials, and I would like Ray to spread the trials evenly automatically.
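
One direction I’m considering is requesting resources through a PlacementGroupFactory with the SPREAD strategy; an untested sketch, assuming the two GPUs live on separate nodes (SPREAD works at node granularity, so it would not balance two GPUs inside a single machine):

```python
from ray import tune
from ray.tune import PlacementGroupFactory

def trainable(config):
    ...  # placeholder for the actual training function

tune.run(
    trainable,
    num_samples=5,
    # SPREAD tries to place each trial's bundle on a different node,
    # so trials would alternate between the two GPU nodes.
    resources_per_trial=PlacementGroupFactory(
        [{"CPU": 1, "GPU": 0.2}],
        strategy="SPREAD",
    ),
)
```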