I’m using Ray on a cluster with a variable number of GPUs per node. I want to run a task on each node and let it consume all the GPUs on that node. I have defined a custom node resource to make sure tasks are not run in parallel on the same node. However, if I don’t set num_gpus for the remote function, Ray sets CUDA_VISIBLE_DEVICES to an empty string. So I’m forced to provide some fixed num_gpus value, which leaves nodes with more GPUs underutilized (see the sketch after the questions below).
Could someone help with either of these questions:
Can I specify a flexible number of GPUs per task?
or
How do I stop Ray from editing CUDA_VISIBLE_DEVICES?
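For context, here is a minimal sketch of the setup described above. The custom resource name `node_slot`, its capacity of 1 per node, the `num_gpus=2` value, and the node count of 3 are all illustrative assumptions, not part of my actual cluster config:

```python
import os
import ray

ray.init(address="auto")

# "node_slot" is a made-up custom resource, assumed to be registered with
# capacity 1 on every node (e.g. `ray start --resources='{"node_slot": 1}'`),
# so at most one of these tasks runs per node at a time.
@ray.remote(resources={"node_slot": 1}, num_gpus=2)  # num_gpus must be fixed up front
def per_node_task():
    # Ray restricts the task to the 2 GPUs it reserved; nodes with more GPUs
    # are left underutilized. Without num_gpus this would be "".
    return os.environ.get("CUDA_VISIBLE_DEVICES")

# One task per node slot; the node count (3 here) is assumed known to the caller.
print(ray.get([per_node_task.remote() for _ in range(3)]))
```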
I think the RAY_EXPERIMENTAL_NOSET_CUDA_VISIBLE_DEVICES env var is there for exactly that: set it to a non-empty value in both the driver process and the runtime env.
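A rough sketch of what that could look like, reusing the hypothetical `node_slot` resource from above:

```python
import os
import ray

# Set the flag in the driver process before connecting ...
os.environ["RAY_EXPERIMENTAL_NOSET_CUDA_VISIBLE_DEVICES"] = "1"

# ... and propagate it to the workers through the runtime env.
ray.init(
    address="auto",
    runtime_env={"env_vars": {"RAY_EXPERIMENTAL_NOSET_CUDA_VISIBLE_DEVICES": "1"}},
)

# Same made-up "node_slot" resource keeps one task per node; with the flag set,
# Ray leaves CUDA_VISIBLE_DEVICES untouched even though no num_gpus is
# requested, so the task sees every GPU on its node.
@ray.remote(resources={"node_slot": 1})
def per_node_task():
    return os.environ.get("CUDA_VISIBLE_DEVICES")
```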
Thanks @Yard1 and @cade for your suggestions. @Architect Let us know if either suggestion worked for you to stop Ray from managing CUDA_VISIBLE_DEVICES.
Hi all, thanks for your help. Indeed, I ended up managing CUDA_VISIBLE_DEVICES manually. I did not know about RAY_EXPERIMENTAL_NOSET_CUDA_VISIBLE_DEVICES; that could have helped.
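For reference, one way to manage CUDA_VISIBLE_DEVICES manually inside each task is sketched below. It assumes nvidia-smi is available on every node and reuses the made-up `node_slot` resource; it is not necessarily what the original poster did:

```python
import os
import subprocess
import ray

@ray.remote(resources={"node_slot": 1})  # made-up per-node resource, no num_gpus
def per_node_task():
    # Count the GPUs physically present on this node (nvidia-smi ignores
    # CUDA_VISIBLE_DEVICES) and expose all of them before CUDA is initialized.
    n_gpus = len(subprocess.check_output(["nvidia-smi", "-L"], text=True).strip().splitlines())
    os.environ["CUDA_VISIBLE_DEVICES"] = ",".join(str(i) for i in range(n_gpus))
    # ... run the actual GPU workload here ...
    return os.environ["CUDA_VISIBLE_DEVICES"]
```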