[RAY SGD] Train PyTorch model on a machine with 2 GPUs

I would like to train my PyTorch model on a machine with 2 GPUs. I have a TorchTrainer and a TrainingOperator, and in TorchTrainer I set num_workers=2 and use_gpu=True, but according to the documentation it will use only one GPU for all workers. (The documentation says: "use_gpu (bool): Sets resource allocation for workers to 1 GPU.") Will it use only one GPU at a time for both workers? If so, how can I make it run concurrently on both GPUs (each worker on one particular GPU)?
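For reference, a stripped-down sketch of my setup (the creator functions here are toy placeholders standing in for my real model, optimizer, and data):

```python
import ray
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from ray.util.sgd import TorchTrainer
from ray.util.sgd.torch import TrainingOperator

def model_creator(config):
    # Placeholder for my real model.
    return nn.Linear(10, 1)

def optimizer_creator(model, config):
    return torch.optim.SGD(model.parameters(), lr=config.get("lr", 0.01))

def data_creator(config):
    # Placeholder for my real dataset.
    dataset = TensorDataset(torch.randn(256, 10), torch.randn(256, 1))
    return DataLoader(dataset, batch_size=32)

MyOperator = TrainingOperator.from_creators(
    model_creator=model_creator,
    optimizer_creator=optimizer_creator,
    data_creator=data_creator,
    loss_creator=nn.MSELoss,
)

ray.init()
trainer = TorchTrainer(
    training_operator_cls=MyOperator,
    num_workers=2,   # two training workers
    use_gpu=True,    # per the docs: 1 GPU allocated per worker
)
trainer.train()
trainer.shutdown()
```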

Thanks in advance.

Good question! @rliaw do you know the answer to this?

Thanks for the ping! @Many98 if you set num_workers=2 and use_gpu=True, each worker is allocated its own GPU (meaning 2 GPUs in total), and both workers train concurrently, one on each GPU.
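If you want to sanity-check the placement yourself, one option (a rough sketch building on the creator-based `MyOperator` from the snippet above) is to log the GPU assignment from inside each worker. Ray restricts each worker to its assigned GPU via CUDA_VISIBLE_DEVICES, so the two workers should report different physical ids:

```python
import ray
import torch

# Hypothetical subclass of the MyOperator class defined above,
# added only to log which GPU each worker received.
class LoggingOperator(MyOperator):
    def setup(self, config):
        super().setup(config)  # run the creator-based setup as usual
        # Physical GPU id(s) Ray assigned to this worker, e.g. [0] and [1].
        print("Assigned GPU ids:", ray.get_gpu_ids())
        # Each worker sees only its own GPU, so this prints 0 on both workers.
        print("torch device index:", torch.cuda.current_device())

# Then pass training_operator_cls=LoggingOperator to TorchTrainer.
```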