How to load resources on GPU inside environment?

I’m running a multi-agent simulation, and the models I’m currently training are loaded to and saved from the GPU. However, when I try to load past sets of weights as agents in the environment, torch.cuda.is_available() returns False when called from inside the environment, even though the models run on the GPU during training.

Since I’m saving the models from the GPU, I’m unable to load them into CPU memory inside the environment. Is there any reason why the environment might not have access to the GPU, and if so, is there a way to change this?

You need to set num_gpus_per_worker in the config to a nonzero value.
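A minimal sketch of what that looks like, assuming an older RLlib API where the config is a plain dict and `PPOTrainer` is used (adjust the class and keys to match your Ray version):

```python
import ray
from ray.rllib.agents.ppo import PPOTrainer

ray.init()

config = {
    "env": "CartPole-v0",
    "framework": "torch",
    "num_workers": 1,
    "num_gpus": 1,             # GPU(s) reserved for the learner/driver process
    "num_gpus_per_worker": 1,  # GPU(s) visible to each rollout worker, so that
                               # torch.cuda.is_available() is True inside the
                               # environment running on that worker
}

trainer = PPOTrainer(config=config)
trainer.train()
```

With `num_gpus_per_worker` left at its default of 0, rollout workers are scheduled without any GPU allocation, which is why CUDA is unavailable from code running in the environment.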


Hey @jarbus, there is also an example script that illustrates the usage of GPUs (even fractions of GPUs) on the workers. It’s located under:
ray/rllib/examples/partial_gpus.py
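For illustration only (this is not the contents of that script), fractional allocations are expressed as float values in the same config keys, so several workers can share one physical GPU:

```python
# Hypothetical fractional-GPU config under the same dict-style API as above.
config = {
    "env": "CartPole-v0",
    "framework": "torch",
    "num_workers": 2,
    "num_gpus": 0.5,              # learner shares half of a GPU
    "num_gpus_per_worker": 0.25,  # each rollout worker gets a quarter of a GPU
}
```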