Continue tuning after changing the search range

I have a tuning job that has been running for a very long time. Then I realized a few hyperparameters' search ranges need to be widened. Is the right procedure to just stop the tuning, change the search ranges, and restore the tuning using algorithm.restore_from_dir()? Thanks in advance.

Hi @wxie2013, that’s a bit tricky, depending on how your search is going. Generally overwriting the search space is not supported.

Are you using random/grid search or a custom searcher?

Some searchers support passing already evaluated parameters (from the first run) to bootstrap the internal model.
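For example (just a sketch with placeholder values, not taken from your run), HyperOptSearch accepts `points_to_evaluate` and `evaluated_rewards`, which seed its internal model with configs and metric values observed earlier:

from ray.tune.search.hyperopt import HyperOptSearch

# Configs and metric values observed in the first run (placeholder values)
previous_configs = [
    {"width": 12.3, "height": 41.0},
    {"width": 18.7, "height": -23.5},
]
previous_rewards = [6.1, 2.4]

# Seed the searcher with the old observations before starting the new run
searcher = HyperOptSearch(
    metric="_metric",
    mode="max",
    points_to_evaluate=previous_configs,
    evaluated_rewards=previous_rewards,
)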

For random search, you could try adding a new experiment configuration to the BasicVariantGenerator (a rough sketch is below). If you provide more context, we can think about what the best solution is here.
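For illustration only (the configs below are placeholders), seeding random search with configs from a previous run could look roughly like this:

from ray.tune.search.basic_variant import BasicVariantGenerator

# Configs from the first run to re-evaluate before sampling new ones (placeholders)
known_points = [
    {"width": 12.3, "height": 41.0},
    {"width": 18.7, "height": -23.5},
]

search_alg = BasicVariantGenerator(points_to_evaluate=known_points)
# Pass `search_alg` to tune.TuneConfig(search_alg=...) as usual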

Hi @kai, I'm using hyperopt for the search. Will that work in this case? Thanks

It does not work out of the box, I'm afraid.

But you can work around that. The idea is to load the results from the previous experiment and initialize the searcher manually with them.

E.g. if your first run looks something like this:

from ray import air, tune
from ray.tune.search import ConcurrencyLimiter
from ray.tune.search.hyperopt import HyperOptSearch


def train(config):
    return (0.1 + config["width"] / 100) ** (-1) + config["height"] * 0.1 * 3


config = {"width": tune.uniform(10, 20), "height": tune.uniform(-80, 80)}


searcher = HyperOptSearch(metric="_metric", mode="max", n_initial_points=4)
searcher = ConcurrencyLimiter(searcher, max_concurrent=1)
tuner = tune.Tuner(
    train,
    param_space=config,
    run_config=air.RunConfig(name="hyperopt_first"),
    tune_config=tune.TuneConfig(search_alg=searcher, num_samples=10),
)
tuner.fit()

Then your second run could look e.g. like this:

from ray import air, tune
from ray.tune.search.hyperopt import HyperOptSearch


def train(config):
    return (0.1 + config["width"] / 100) ** (-1) + config["height"] * 0.1 * 3


config = {"width": tune.uniform(0, 30), "height": tune.uniform(-100, 100)}


# Load results from first experiment
old_results = tune.ExperimentAnalysis("~/ray_results/hyperopt_first")

# Load trial configs and results
points_to_evaluate = []
trial_to_result = {}
for trial in old_results.trials:
    if trial.status != "TERMINATED":
        continue
    print("Loading", trial)
    trial_to_result[str(trial)] = trial.last_result
    points_to_evaluate.append(trial.config)


# Create searcher. Note that we pass the `config` here already
searcher = HyperOptSearch(
    space=config,
    metric="_metric",
    mode="max",
    n_initial_points=0,
    points_to_evaluate=points_to_evaluate,
)

# Loop through existing results, create trial and report to searcher.
# Note that these trials will not show up in the trial table in the next
# run!
for trial_name, result in trial_to_result.items():
    # These suggestions are the `points_to_evaluate` passed above; use a
    # separate variable so the `config` search space is not overwritten.
    suggested = searcher.suggest(trial_name)
    # Report the stored result as the trial's final result to update the searcher.
    searcher.on_trial_complete(trial_name, result)


tuner = tune.Tuner(
    train,
    param_space=config,
    run_config=air.RunConfig(name="hyperopt_second"),
    tune_config=tune.TuneConfig(search_alg=searcher, num_samples=8),
)
tuner.fit()
