I would like to train my PyTorch model on a machine with 2 GPUs. I have a TorchTrainer and a TrainingOperator, and in the TorchTrainer I set num_workers=2 and use_gpu=True, but according to the documentation this will use only one GPU for all the workers. (The documentation says: "use_gpu (bool): Sets resource allocation for workers to 1 GPU.") Will it really use only one GPU at a time for both workers? If so, how can I make it run concurrently on both GPUs, with each worker on its own GPU?
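For reference, here is a minimal sketch of the kind of setup I mean, following the custom TrainingOperator pattern from the Ray SGD docs. MyTrainingOperator, the linear model, and the random data are just placeholders for my real code:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

import ray
from ray.util.sgd import TorchTrainer
from ray.util.sgd.torch import TrainingOperator


class MyTrainingOperator(TrainingOperator):
    def setup(self, config):
        # Placeholder model, optimizer, loss, and data to keep the sketch runnable.
        model = nn.Linear(10, 1)
        optimizer = torch.optim.SGD(model.parameters(), lr=config.get("lr", 1e-2))
        criterion = nn.MSELoss()
        dataset = TensorDataset(torch.randn(256, 10), torch.randn(256, 1))
        train_loader = DataLoader(dataset, batch_size=config.get("batch_size", 32))

        self.model, self.optimizer, self.criterion = self.register(
            models=model, optimizers=optimizer, criterion=criterion)
        self.register_data(train_loader=train_loader, validation_loader=None)


if __name__ == "__main__":
    ray.init()

    trainer = TorchTrainer(
        training_operator_cls=MyTrainingOperator,
        num_workers=2,   # two training worker processes
        use_gpu=True,    # unclear to me: 1 GPU per worker, or 1 GPU shared by all workers?
        config={"lr": 1e-2, "batch_size": 32},
    )

    for _ in range(5):
        print(trainer.train())

    trainer.shutdown()
```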
Thanks in advance.