Can 'tune.run' just run a function on multiple GPUs with different configs without "trials"?

Hello all! As the title suggests, I'm curious whether it's possible to use tune.run to just run any function with a different set of configs, auto-queued on fractions of GPUs, without the hyperparameter search. Say, for example, I have a function train and I want to run it with different configs in parallel on subdivisions of GPU memory: once with config={"model_type": 0}, another time with config={"model_type": 1}, and maybe one more time with config={"model_type": 2}. So there's no hyperparameter space to search, and I don't want tune.run to pick out a "best" run in this case, since I want to see the results / checkpoints for all three configs.

I like how all the results are tabulated and evenly split over GPUs; I just want to do that without picking a 'best' run per se. :smiley: Consequently, I wouldn't want this to auto-terminate any trials, since I'm intentionally trying out all the different configs. I guess this means I shouldn't use any schedulers? Is it sufficient to just set scheduler=None to ensure that no trial early-terminates?

Hi @actuallyaswin, yes, that’s possible! Ray Tune does not care about the kind of function you’re running as long as it adheres to the API.

This means a few things:

  1. You should probably use grid search and constant parameters instead of defining a random search space.
  2. For scheduling, you'll use the FIFOScheduler, which is the default and just executes trials one after another as resources become available. This is indeed what you get when you pass scheduler=None (or nothing at all), so no trial will be early-terminated.
  3. Your trials should still report a metric so that Ray Tune doesn't complain. This can be a constant value. With grid search and FIFO scheduling, the metric is just recorded and doesn't influence training or scheduling in any way.

Something like this should work:

from ray import tune

def train(config):
    # ... train a model based on config["model_type"] ...
    tune.report(metric=0)  # constant dummy metric, just so Tune has something to record

config = {
    "model_type": tune.grid_search([0, 1, 2]),  # you could also pass strings here
    "constant_value": 5.0,  # passed unchanged to every trial
}

tune.run(
    train,
    config=config,
    resources_per_trial={"gpu": 0.25},  # up to 4 concurrent trials per GPU
)
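
Regarding checkpoints: you can write one from inside train, and Tune will track it per trial. Here's a minimal sketch, assuming the Ray 1.x function API (tune.checkpoint_dir); the file name "checkpoint" and its contents are just placeholders:

import os

from ray import tune

def train(config):
    # ... train a model based on config["model_type"] ...
    with tune.checkpoint_dir(step=1) as checkpoint_dir:
        # Anything written into this directory is stored as this trial's checkpoint.
        with open(os.path.join(checkpoint_dir, "checkpoint"), "w") as f:
            f.write(str(config["model_type"]))  # placeholder payload
    tune.report(metric=0)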
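
And since you don't want a single "best" run: tune.run returns an analysis object covering all trials, so you can tabulate every result yourself. A sketch, assuming Ray 1.x, where tune.run returns an ExperimentAnalysis:

analysis = tune.run(
    train,
    config=config,
    resources_per_trial={"gpu": 0.25},
)

# One row per trial (i.e. one per model_type), including the config columns.
print(analysis.results_df)

# Checkpoint paths recorded for each trial, if train wrote checkpoints as above.
for trial in analysis.trials:
    print(analysis.get_trial_checkpoints_paths(trial))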