Tune.run() vs Tuner.fit()

I recently saw the README change from using tune.run() to Tuner.fit().

Will tune.run() API be removed in the future?
How stable is this new Tuner.fit() API?
Which API do you recommend?

@raytune_kuberay_user
Thanks for asking this question.
Tuner is the recommended way of running HPO workloads on Ray AIR. The migration is needed so that the various Ray components in Ray AIR (Ray Tune, Ray Train, etc.) have a consistent feel and consistent APIs.
If you are looking to expand your use case beyond just tuning, Tuner is the better API to use. Tuner is currently in beta (as of the Ray 2.0.0 release). In the long run, tune.run will be deprecated. If you see any gap between the Tuner API and tune.run, please file a bug and let the Ray Team know. Thank you!
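For concreteness, the migration roughly maps tune.run arguments onto the Tuner constructor. The following is a hedged sketch against the Ray 2.0-era API (the exact argument names, e.g. TuneConfig and RunConfig, may shift between releases; the trainable and experiment name here are placeholders):

```python
from ray import tune
from ray.air import RunConfig


def trainable(config):
    # Placeholder objective for illustration only.
    tune.report(score=config["lr"] * 2)


# Old style (slated for deprecation in the long run):
#   analysis = tune.run(trainable, config={"lr": tune.grid_search([0.01, 0.1])},
#                       num_samples=2, name="my_experiment")

# New style: the search space goes to param_space, tuning options to TuneConfig,
# and run-level options (name, checkpointing, failure handling) to RunConfig.
tuner = tune.Tuner(
    trainable,
    param_space={"lr": tune.grid_search([0.01, 0.1])},
    tune_config=tune.TuneConfig(num_samples=2),
    run_config=RunConfig(name="my_experiment"),
)
# results = tuner.fit()  # returns a ResultGrid instead of an ExperimentAnalysis
```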


@xwjiang2010 Thanks for the reply.

In terms of functionality, is there anything that tune.run() supports and Tuner doesn't? (And conversely, anything Tuner supports that tune.run() doesn't?)

Our use case is ray.tune + TensorFlow + Horovod/tf.distribute strategies. In the future, we might want to try elastic Horovod with Ray.

There are minor API differences - e.g. the export_models argument has not been carried over to Tuner(), as it will be deprecated (instead, you should just export the models within the checkpoints).

At the moment, Tuner() uses tune.run internally, but this may change in the future.

We believe that all use cases should be covered in the Tuner() API, so if any functionality is missing, please let us know and we’ll add it!

As for benefits, the Tuner() API supports e.g. better restoration, failure handling, and a neater output format (results grid instead of experiment analysis).
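To illustrate the restoration and results-handling differences, here is a sketch assuming the Ray 2.0-era Tuner.restore and ResultGrid APIs (the path and metric name are hypothetical, and restore only works against an existing experiment directory):

```python
from ray import tune

# Restoring an interrupted experiment (previously tune.run(..., resume=True)):
tuner = tune.Tuner.restore(path="~/ray_results/my_experiment")
results = tuner.fit()

# Tuner.fit() returns a ResultGrid rather than an ExperimentAnalysis:
best = results.get_best_result(metric="score", mode="max")
print(best.config, best.checkpoint)
```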


Where do I pass checkpoint_freq in Tuner()? It works fine in tune.run(checkpoint_freq=1), but I can't find where to pass it to Tuner; neither run_config nor checkpoint_config accepts checkpoint_freq.

Use run_config.checkpoint_config.
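Concretely, a sketch assuming the Ray 2.x air.CheckpointConfig (note that the Tuner-side parameter is checkpoint_frequency rather than checkpoint_freq, and my_trainable is a placeholder):

```python
from ray import tune
from ray.air import RunConfig, CheckpointConfig


def my_trainable(config):
    # Placeholder trainable for illustration.
    ...


tuner = tune.Tuner(
    my_trainable,
    run_config=RunConfig(
        checkpoint_config=CheckpointConfig(
            checkpoint_frequency=1,  # equivalent of tune.run(checkpoint_freq=1)
        )
    ),
)
```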


Hi! In tune.run(…) it is possible to use

tune.run(..., config={"evaluation_config": {..}})

to evaluate your agent after training.
What is the proper way to implement this within tune.Tuner() API?

@a28091 this is an RLlib-specific setting, but it works the same in tune.run and Tuner.fit() - i.e. just pass it as part of your param_space:

Tuner(
    ...,
    param_space={"evaluation_config": ...}
)