Sequential PBT: fewer GPUs than trials

I wonder if you can use PBT sequentially. In the documentation, it is written that

"If the number of trials exceeds the cluster capacity, they will be time-multiplexed so as to balance training progress across the population. To run multiple trials, use tune.run(num_samples=)."

but I am not sure what that means. Does that mean that I should have no problems if I want to use Tune with PBT on one GPU and 8 trials? I am not exactly sure how that would work, though, considering that part of the goal of PBT is to be able to stop runs as soon as possible. Can you give a bit more information on whether my use case is possible and how it works?

Hey @BramVanroy, yes, even if you have 1 GPU and 8 trials, PBT will still work. One iteration will be run for one trial, then another iteration for the next trial, and so on. Eventually, low-performing trials will be mutated, but it will take a longer time to get there if you only have 1 GPU.
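As a rough illustration, a setup along these lines should behave that way: 8 trials are created, but with resources_per_trial asking for 1 GPU each, only one trial runs at a time and Tune time-multiplexes them. The training function and hyperparameter values below are placeholders, so treat this as a minimal sketch rather than a ready-made config.

```python
import random
from ray import tune
from ray.tune.schedulers import PopulationBasedTraining

# Placeholder trainable -- substitute your own training loop that
# reports a metric via tune.report() every iteration.
def train_fn(config):
    score = 0.0
    while True:
        score += config["lr"] * random.random()  # stand-in for real training
        tune.report(mean_accuracy=score)

pbt = PopulationBasedTraining(
    time_attr="training_iteration",
    metric="mean_accuracy",
    mode="max",
    perturbation_interval=4,                       # exploit/explore every 4 iterations
    hyperparam_mutations={"lr": [1e-4, 1e-3, 1e-2, 1e-1]},
)

analysis = tune.run(
    train_fn,
    scheduler=pbt,
    num_samples=8,                    # population of 8 trials
    resources_per_trial={"gpu": 1},   # only one trial fits on the single GPU at a time
    config={"lr": 1e-3},
    stop={"training_iteration": 100},
)
```

With only one GPU, each "generation" of the population takes roughly 8x as long as it would with 8 GPUs, which is why the mutations arrive later in wall-clock time.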

So my recommendation is to parallelize and use as many resources as possible :slightly_smiling_face: