Question about evaluation

How severely does this issue affect your experience of using Ray?

  • Low: It annoys or frustrates me for a moment.

I noticed that the Algorithm class provides an [evaluate()](https://docs.ray.io/en/latest/rllib/package_ref/doc/ray.rllib.algorithms.algorithm_config.AlgorithmConfig.evaluation.html#ray.rllib.algorithms.algorithm_config.AlgorithmConfig.evaluation) method. My workflow is to run training and evaluation as separate steps.

But the evaluate() function only works if I set the Algorithm's evaluation config to have evaluation_interval=1, which means evaluation also runs automatically after every call to train(). I currently have evaluation_parallel_to_training=False to prevent it from running concurrently with train(), but there does not seem to be any way to tell the algorithm not to perform evaluation until I explicitly call evaluate().
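
For reference, here is a minimal sketch of the setup described above. PPO and CartPole-v1 are just placeholder choices, and the loop length is arbitrary; only evaluation_interval and evaluation_parallel_to_training are the settings in question:

```python
from ray.rllib.algorithms.ppo import PPOConfig

# Minimal sketch of the current setup (PPO / CartPole-v1 are placeholders).
config = (
    PPOConfig()
    .environment("CartPole-v1")
    .evaluation(
        evaluation_interval=1,                  # evaluate() only seems to work with this set
        evaluation_parallel_to_training=False,  # keep evaluation out of the train() step
    )
)
algo = config.build()

# What I would like: train without any evaluation happening...
for _ in range(10):
    algo.train()  # ...but with evaluation_interval=1 this also triggers evaluation

# ...and then evaluate only when explicitly requested.
eval_results = algo.evaluate()
```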

Is there a way to completely separate out the training from the evaluation processing?