Hi again, I’m working with RLlib on a system that has two Nvidia GPUs. Is there any way to choose which one to use for model training? I mean, if I want to compare the individual performance of both GPUs, I want to train using one of them first and the other one later, and compare the times in both cases. The only way I found was to change TensorFlow’s visible devices with tf.config.set_visible_devices(), but I wanted to know if this is the correct way to do that.
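For reference, this is roughly what I’ve been doing so far (the GPU index 1 is just an example, not a recommendation):

```python
import tensorflow as tf

# List the physical GPUs TensorFlow can see on this machine.
gpus = tf.config.list_physical_devices("GPU")

if gpus:
    # Restrict TensorFlow to a single GPU (here: the second one, index 1).
    tf.config.set_visible_devices(gpus[1], "GPU")

# Sanity check: only the selected GPU should show up as a logical device.
print(tf.config.list_logical_devices("GPU"))
```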
There is no built-in way in RLlib or Tune right now to hand-pick the actual GPU to use (on a multi-GPU machine). Yes, you could try to hack this with your above solution. Make sure, though, to print out the device name in your model/policy to be 100% sure it’s really placing the model on the one you want. I have never tried manually tinkering with CUDA_VISIBLE_DEVICES. Let us know how it goes!
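Just as an untested sketch of the CUDA_VISIBLE_DEVICES route: the key point is that the environment variable has to be set before TensorFlow (or Ray) is imported, so that only the chosen GPU is visible to the process. The `self.base_model.variables[0].device` line is only an illustration of how you might print placement inside a custom TF model; adapt it to whatever attribute your model actually exposes.

```python
import os

# Hide all but one GPU from CUDA *before* importing TensorFlow/Ray.
# "1" is just an example index; set it to the GPU you want to benchmark.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import tensorflow as tf

# Verify which physical GPUs TensorFlow can actually see now.
print(tf.config.list_physical_devices("GPU"))

# Inside your custom model/policy you could additionally print the device
# of one of its variables to confirm placement, e.g. (illustrative only):
# print(self.base_model.variables[0].device)
```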