RaySGD fails to find GPUs

I’m using RaySGD on an EC2 instance that has 4 T4 GPUs.
Single-GPU training in plain PyTorch runs fine. But when I try to use RaySGD to train across the 4 GPUs, I get:

/home/ec2-user/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/cuda/amp/grad_scaler.py:116: UserWarning: torch.cuda.amp.GradScaler is enabled, but CUDA is not available. Disabling.

Why doesn’t Ray see my 4 GPUs?

Hey @Lacruche, do you mind sharing a longer stack trace and a code snippet? Thanks!