Is the last reported metric used for tuning?

I’m quite confused: in all the tuning examples, we report a metric many times, such as

for step in range(start, 100):
    # ... do some training work ...
    tune.report(
        mean_accuracy=accuracy,
    )

Does the hyper-parameter optimizer simply use the last reported value? Or something different?

In my experience, when tuning neural networks with an early stopping mechanism, the last metric reported before the trial stops is the one used to rank it.
The metrics reported along the way are consumed by schedulers such as ASHA, which may decide to interrupt a trial early if it performs below the population median.
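To illustrate the idea of a median-based stopping rule, here is a hedged, self-contained sketch (this is not Ray Tune's actual ASHA implementation; the function name `should_stop` is made up for illustration):

```python
from statistics import median

def should_stop(trial_metric, peer_metrics_at_step):
    """Stop a trial if its latest reported metric falls below the
    median of what peer trials reported at the same training step.

    trial_metric: latest value this trial reported (higher is better)
    peer_metrics_at_step: values the other trials reported at this step
    """
    if not peer_metrics_at_step:
        return False  # nothing to compare against yet
    return trial_metric < median(peer_metrics_at_step)

# A trial reporting 0.60 while its peers reported 0.7, 0.8, and 0.9
# at the same step falls below the median and would be stopped early.
```

The key point is that the rule runs on every intermediate report, so a trial can be cut off long before it reaches its final step.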


Yes, this is correct.

Each call to tune.report() will produce a result, and each result is processed by the search algorithm, the scheduler, and the callbacks (e.g. loggers). The scheduler can decide at any point to stop a trial (e.g. early stopping with ASHA); otherwise the trial just continues training.

Generally, after each result a decision about the fate of the trial will be made. Most of the time the decision is simply to continue running.
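That per-result control flow can be sketched like this (a hedged illustration only: the names `process_result`, `ThresholdScheduler`, and the `"CONTINUE"`/`"STOP"` strings are invented for the example and are not Ray Tune's real API):

```python
CONTINUE, STOP = "CONTINUE", "STOP"

def process_result(result, searcher, scheduler, callbacks):
    """Handle one reported result: the search algorithm records it,
    each callback logs it, and the scheduler decides the trial's fate."""
    searcher.on_result(result)       # update the search algorithm
    for cb in callbacks:
        cb.on_result(result)         # e.g. loggers
    return scheduler.decide(result)  # CONTINUE most of the time

class ThresholdScheduler:
    """Toy scheduler: stop any trial whose metric drops below a threshold."""
    def __init__(self, threshold):
        self.threshold = threshold

    def decide(self, result):
        if result["mean_accuracy"] < self.threshold:
            return STOP
        return CONTINUE
```

A real scheduler's decision logic is far more involved, but the shape is the same: every reported result passes through this loop, and "stop" is just one possible outcome of each pass.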
