Hello all! As the title suggests, I was curious whether it’s possible to use tune.run just to run an arbitrary function with a set of different configs, auto-queued on fractions of GPUs, i.e. without any hyperparameter search. Say I have a function train and I want to run it with different configs in parallel on subdivisions of GPU memory: once with config={"model_type": 0}, another time with config={"model_type": 1}, and maybe one more time with config={"model_type": 2}. So there’s no hyperparameter space to search, and I don’t want tune.run to pick out a “best” run in this case, since I want to see the results / checkpoints for all three configs.
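Something like the following sketch is what I have in mind (the dummy train body, the loss metric, and the 0.33 GPU fraction are just placeholders for illustration):

```python
from ray import tune

# Placeholder for my real training function
def train(config):
    model_type = config["model_type"]
    for step in range(10):
        # ... build and train the model for this model_type ...
        tune.report(loss=1.0 / (step + 1))  # dummy metric, just so results get tabulated

analysis = tune.run(
    train,
    config={"model_type": tune.grid_search([0, 1, 2])},  # one trial per value, nothing searched
    resources_per_trial={"gpu": 0.33},  # let three trials share a single GPU
    num_samples=1,  # run each grid point exactly once
)

# I'd want to inspect all three trials, not just a "best" one
print(analysis.results_df)
```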
I like how all the results are tabulated and the trials are split evenly over the GPUs; I just want to do that without picking a ‘best’ run per se.
Consequently, I wouldn’t want this to auto-terminate any trials, since I intentionally want to try out all the different configs. I guess this means I shouldn’t use any Schedulers? Is it sufficient to just set scheduler=None to ensure that no trial terminates early?
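For concreteness, my understanding (please correct me if I’m wrong) is that the default FIFOScheduler already runs every trial to completion, so leaving scheduler unset, or passing it explicitly as below, would behave the same as scheduler=None:

```python
from ray import tune
from ray.tune.schedulers import FIFOScheduler

# train as defined in the sketch above; FIFO is Tune's default scheduler and,
# as far as I know, it only queues trials and never early-terminates them
analysis = tune.run(
    train,
    config={"model_type": tune.grid_search([0, 1, 2])},
    resources_per_trial={"gpu": 0.33},
    scheduler=FIFOScheduler(),
)
```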