[Tune] ray tune with each trial running tf.distribute.experimental.MultiWorkerMirroredStrategy()

We want to use Ray Tune with each trial running tf.distribute.experimental.MultiWorkerMirroredStrategy().

We are using Ray 1.12.

I noticed that there are two ways to do this:

  1. Use Trainer, call trainer.to_tune_trainable, and pass the result to tune.run()
  2. Use DistributedTrainableCreator and pass the result to tune.run(), as in
     ray/tf_distributed_keras_example.py at ray-1.12.0 · ray-project/ray · GitHub (a simplified sketch of this follows the list)
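
For reference, this is roughly what we have for option 2 on Ray 1.12, adapted from the linked example; the model, dataset, and learning-rate search space are just placeholders:

```python
import tensorflow as tf
from ray import tune
from ray.tune.integration.tensorflow import DistributedTrainableCreator


def train_func(config):
    # Ray sets TF_CONFIG on each worker, so the strategy can discover its peers.
    strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
            tf.keras.layers.Dense(1),
        ])
        model.compile(optimizer=tf.keras.optimizers.SGD(config["lr"]), loss="mse")
    # ... model.fit(...) on a per-worker dataset, reporting metrics back to Tune,
    # e.g. via a Keras callback that calls tune.report(loss=...)


# Wrap the training function so each Tune trial launches its own TF worker group.
tf_trainable = DistributedTrainableCreator(
    train_func,
    num_workers=2,  # TF workers per trial
)

tune.run(
    tf_trainable,
    config={"lr": tune.loguniform(1e-4, 1e-1)},
    num_samples=4,
)
```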

Which one do you recommend?
Will DistributedTrainableCreator be deprecated in the future, say in Ray 2.0?

Both will be deprecated in 2.0.

The 2.0 usage will look something like the Ray AI Runtime (AIR) workflow described in the Ray 2.0.0 docs.
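
Very roughly, the AIR pattern could end up looking like the sketch below (the model and the lr search space are just for illustration):

```python
import tensorflow as tf
from ray import tune
from ray.air.config import ScalingConfig
from ray.train.tensorflow import TensorflowTrainer
from ray.tune import Tuner


def train_loop_per_worker(config):
    # TensorflowTrainer sets up TF_CONFIG, so the strategy finds the
    # other workers of this trial automatically.
    strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
    with strategy.scope():
        model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(8,))])
        model.compile(optimizer=tf.keras.optimizers.SGD(config["lr"]), loss="mse")
    # ... model.fit(...) and report metrics with ray.air.session.report(...)


trainer = TensorflowTrainer(
    train_loop_per_worker,
    train_loop_config={"lr": 1e-3},
    scaling_config=ScalingConfig(num_workers=2),
)

# Tuner sweeps over the trainer's train_loop_config.
tuner = Tuner(
    trainer,
    param_space={"train_loop_config": {"lr": tune.loguniform(1e-4, 1e-1)}},
)
results = tuner.fit()
```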

For now, the Trainer workflow will probably be the closest thing.
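
On 1.12, that Trainer-based version could look roughly like this (model and search space are placeholders, and metric reporting is elided):

```python
import tensorflow as tf
from ray import tune
from ray.train import Trainer


def train_func(config):
    # Ray Train exports TF_CONFIG on every worker before calling this function.
    strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
    with strategy.scope():
        model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(8,))])
        model.compile(optimizer=tf.keras.optimizers.SGD(config["lr"]), loss="mse")
    # ... model.fit(...) and report metrics with ray.train.report(...)


trainer = Trainer(backend="tensorflow", num_workers=2)

# Convert the Trainer into a Tune trainable; each trial gets its own worker group.
trainable = trainer.to_tune_trainable(train_func)

tune.run(
    trainable,
    config={"lr": tune.loguniform(1e-4, 1e-1)},
    num_samples=4,
)
```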