How to stop Ray from managing CUDA_VISIBLE_DEVICES?

I’m using Ray on a cluster with a variable number of GPUs per node. I want to run one task per node and let it consume all the GPUs on that node. I have defined a custom node resource to make sure tasks are not run in parallel on the same node. However, if I don’t set num_gpus for the remote function, Ray sets CUDA_VISIBLE_DEVICES to an empty string. So I’m forced to provide some num_gpus value, which leaves some nodes underutilized.
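
For context, here is a simplified sketch of my setup (the custom resource name node_slot and the num_gpus value are just placeholders, not my actual configuration):

import ray

# One 'node_slot' unit is registered per node when starting Ray, so at most
# one of these tasks can run on a given node at a time.
@ray.remote(resources={'node_slot': 1}, num_gpus=1)
def per_node_task():
    # Ray restricts CUDA_VISIBLE_DEVICES to the GPU(s) it assigned,
    # but this task would like to use every GPU on the node.
    ...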
Could someone help me with either of these questions:

  • Can I specify a flexible number of GPUs?
    or
  • How can I stop Ray from editing CUDA_VISIBLE_DEVICES?

Thank you for your help!

Interesting case of resource requests on a non-homogeneous cluster. cc @jjyao

@Architect I’m not sure what you ended up going with, but one workaround is to unset CUDA_VISIBLE_DEVICES at the beginning of task execution:

import os

import ray

@ray.remote
def task():
    # Drop the restriction Ray placed on GPU visibility; pop() avoids a
    # KeyError if the variable happens not to be set.
    os.environ.pop('CUDA_VISIBLE_DEVICES', None)
    # remaining code that uses all available GPUs on the node

I think we have the RAY_EXPERIMENTAL_NOSET_CUDA_VISIBLE_DEVICES env var for that (set it to a non-null value in both the driver code and the runtime env)
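
For example, a minimal sketch of how that might look, assuming the variable only needs to be a non-empty string in both places:

import os

import ray

# Set the variable for the driver process itself...
os.environ['RAY_EXPERIMENTAL_NOSET_CUDA_VISIBLE_DEVICES'] = '1'

# ...and propagate it to the workers through the runtime environment so
# Ray leaves CUDA_VISIBLE_DEVICES untouched inside tasks.
ray.init(
    runtime_env={
        'env_vars': {'RAY_EXPERIMENTAL_NOSET_CUDA_VISIBLE_DEVICES': '1'}
    }
)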


Thanks @Yard1 and @cade for your suggestions. @Architect let us know if either suggestion worked for stopping Ray from managing CUDA_VISIBLE_DEVICES.

Hi all, thanks for your help. Indeed, I ended up manually managing CUDA_VISIBLE_DEVICES. I didn’t know about RAY_EXPERIMENTAL_NOSET_CUDA_VISIBLE_DEVICES; that could have helped.