Okay, I got it working. Just a small note to add: I think the suggested snippet works for TensorFlow but not for PyTorch as-is. To make it work, I had to add an input parameter `force_cpu` to my training function:
```python
lock = filelock.FileLock("/tmp/gpu.lock")

try:
    # Makes it so that only 1 trial will use the GPU at once.
    with lock.acquire(timeout=0):
        result = training_run(..., force_cpu=False)
        # The lock is released automatically after training is done.
except filelock.Timeout:
    # If the lock is already held by another trial, just use the CPU
    # and disable GPU access.
    result = training_run(..., force_cpu=True)
```
```python
def training_run(..., force_cpu=False):
    # GPU usage: pick the GPU when available, unless forced onto the CPU.
    if not force_cpu:
        device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    else:
        device = torch.device("cpu")
```
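For reference, here is a minimal self-contained sketch of the whole pattern. The `quick_job` function, the `/tmp/gpu_demo.lock` path, and the zero-second timeout are my own illustration, not part of the original snippet:

```python
import filelock

def quick_job(force_cpu=False):
    # Stand-in for the real training function; just reports which device it used.
    return "cpu" if force_cpu else "gpu"

lock = filelock.FileLock("/tmp/gpu_demo.lock")
try:
    # Non-blocking attempt: timeout=0 raises filelock.Timeout immediately
    # if another process already holds the lock.
    with lock.acquire(timeout=0):
        result = quick_job(force_cpu=False)
except filelock.Timeout:
    # Another trial owns the GPU, so fall back to the CPU.
    result = quick_job(force_cpu=True)

print(result)
```

With no other trial holding the lock this prints `gpu`; a second process started while the first still holds the lock would fall into the `except` branch and print `cpu`.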
It is now running super fast. Thanks again, @rliaw!