PPO example cannot use GPU

I want to run the PPO example on a GPU. Here is my training code:

from ray.rllib.agents.ppo import PPOTrainer
import ray

ray.init()

trainer = PPOTrainer(config={
    "env": "CartPole-v0",
    "framework": "torch",
    "num_gpus": 1,
    "num_workers": 4,
})
trainer.train()

However, it fails with an error like:

/ray/rllib/policy/torch_policy.py", line 155, in __init__
    self.device = self.devices[0]
IndexError: list index out of range

I installed PyTorch with GPU support, and torch.cuda.is_available() returns True.
It seems that Ray cannot find the GPU devices even though PyTorch can.
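
For reference, here is roughly how I checked which resources Ray itself detects (a minimal diagnostic sketch, assuming a fresh single-machine ray.init()):

import ray

ray.init()

# If "GPU" is missing from the detected resources, RLlib will not get
# any devices either.
print(ray.cluster_resources())     # resources Ray detected on this node
print(ray.available_resources())   # resources currently unclaimed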

Is there any solution to this?

Thanks!

@daniel What does ray.get_gpu_ids() return?

When I put this line after ray.init(), it returns an empty list.
Is there any way to get Ray to find the GPU?
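
For example, I wondered whether explicitly declaring the GPU when starting Ray would help. A minimal sketch of what I mean (assuming a single local GPU; num_gpus here is the ray.init() argument, not the RLlib config key):

import ray

# Explicitly tell Ray this node has one GPU instead of relying on
# autodetection.
ray.init(num_gpus=1)

print(ray.get_gpu_ids())  # hoping this is no longer an empty list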

This is probably a Ray core issue in that case. For now, I think manually setting the GPU ids here should work as a temporary workaround: ray/torch_policy.py at master · ray-project/ray · GitHub
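
To be clear about the kind of edit I mean, here is a rough sketch (a hypothetical helper, not the actual torch_policy.py code, whose details differ by Ray version):

import ray
import torch

def pick_torch_devices():
    # Prefer the GPU ids Ray assigned to this process; if Ray reports
    # none but PyTorch can still see GPUs, fall back to those so the
    # device list is never empty.
    gpu_ids = ray.get_gpu_ids()
    if not gpu_ids and torch.cuda.is_available():
        gpu_ids = list(range(torch.cuda.device_count()))
    if gpu_ids:
        return [torch.device("cuda:{}".format(i)) for i in gpu_ids]
    return [torch.device("cpu")]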

Is there any other way to solve this via a function or API parameter when I am building my own application?