Search algorithm and concurrency

Hello, I’m new to AutoML and I’m following the tutorial “Running Tune experiments with Optuna” (Ray 2.0.1).

It runs the following snippet:

from ray import tune
from ray.tune.search import ConcurrencyLimiter
from ray.tune.search.optuna import OptunaSearch

algo = OptunaSearch()
algo = ConcurrencyLimiter(algo, max_concurrent=4)
num_samples = 1000

tuner = tune.Tuner(
    objective,  # objective and search_space are defined earlier in the tutorial
    tune_config=tune.TuneConfig(
        metric="mean_loss",
        mode="min",
        search_alg=algo,
        num_samples=num_samples,
    ),
    param_space=search_space,
)
results = tuner.fit()

Am I correct in thinking that this will essentially run 250 rounds of hyperparameter selection (1000 samples / 4 concurrent trials), where the first round will be randomly sampled, and subsequent rounds will use the results from all previous rounds via the search algorithm? If so, if max_concurrent is equal to num_samples, will it be equivalent to running random search?

Hey @tk0802kim, thanks for the question.

I wouldn’t necessarily think of it as 250 rounds. 4 trials will run concurrently to start, with randomly sampled configurations, but as soon as one trial finishes, the next trial will begin using the search algorithm’s suggestion, even if the other 3 trials have not finished yet.

Note that OptunaSearch does accept a points_to_evaluate argument for providing the initial hyperparameter configurations if you do not want them to be randomly sampled.
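For example, a minimal sketch (the width and height keys here are placeholders assumed to match the tutorial’s search_space):

from ray.tune.search import ConcurrencyLimiter
from ray.tune.search.optuna import OptunaSearch

# Hypothetical starting configurations; the keys must match param_space.
initial_configs = [
    {"width": 10, "height": 50},
    {"width": 15, "height": -20},
]

# These points are evaluated first, before Optuna begins suggesting its own.
algo = OptunaSearch(points_to_evaluate=initial_configs)
algo = ConcurrencyLimiter(algo, max_concurrent=4)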

If so, if max_concurrent is equal to num_samples, will it be equivalent to running random search?

If you have enough resources to run all the trials concurrently, then yes, this is correct. But the actual concurrency is min(max_concurrent, available_resources / resources_per_trial), so depending on how many resources you have, it might not be possible to run max_concurrent trials all at once.
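As a rough sketch of what that means in practice (assuming a CPU-only setup where each trial requests 1 CPU; tune.with_resources sets the per-trial resource request in Ray 2.x):

from ray import tune

# Each trial requests 1 CPU, so the effective parallelism is
# min(max_concurrent, total_cluster_cpus / 1).
trainable = tune.with_resources(objective, {"cpu": 1})

tuner = tune.Tuner(
    trainable,
    tune_config=tune.TuneConfig(
        metric="mean_loss",
        mode="min",
        search_alg=algo,
        num_samples=num_samples,
    ),
    param_space=search_space,
)

With max_concurrent set equal to num_samples but only 8 CPUs available, for example, at most 8 trials would actually run at once.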
