Restoring the best model without access to the Analysis object

Hi there,

I want to understand how to restore agents from checkpoints. I’m training with:

analysis = ray.tune.run(...)

With the resulting analysis object I can restore the best agent like this:

checkpoints = analysis.get_trial_checkpoints_paths(
    trial=analysis.get_best_trial('episode_reward_mean', mode='max'),
    metric='episode_reward_mean')

checkpoint_path = checkpoints[0][0]  # list of (path, metric_value) tuples

agent = PPOTrainer(config=my_config, env=my_env)
agent.restore(checkpoint_path)


This is great if the whole process happens in one go, but if anything happens to the Python session, the analysis variable is lost, and with it all the trials and checkpoints.

So I understood that I can simply assign an absolute path to the checkpoint_path variable instead. It’s not the most convenient, but I am able to restore agents that way.

But if there are, say, 100 checkpoints, how do I find the best agent? There would be tons of directories to go through manually.
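I could probably hand-roll a scan like the sketch below. It assumes the default Tune layout, where each trial folder under the experiment directory holds a result.json (one JSON dict per training iteration, one per line) next to its checkpoint_* subfolders; best_checkpoint is just my own name for it, not a Ray API:

```python
import json
import os

def best_checkpoint(experiment_dir, metric="episode_reward_mean"):
    # Hypothetical helper, not part of Ray: walk every trial folder,
    # read its result.json, and return the newest checkpoint_* directory
    # of the trial with the highest value for `metric`.
    best_score, best_path = float("-inf"), None
    for trial in sorted(os.listdir(experiment_dir)):
        trial_dir = os.path.join(experiment_dir, trial)
        result_file = os.path.join(trial_dir, "result.json")
        if not os.path.isfile(result_file):
            continue
        # result.json holds one JSON dict per line, one per iteration
        with open(result_file) as f:
            results = [json.loads(line) for line in f if line.strip()]
        scores = [r[metric] for r in results if metric in r]
        if not scores:
            continue
        score = max(scores)
        # pick the newest checkpoint_* directory inside the trial folder
        checkpoints = sorted(
            d for d in os.listdir(trial_dir) if d.startswith("checkpoint_")
        )
        if checkpoints and score > best_score:
            best_score = score
            best_path = os.path.join(trial_dir, checkpoints[-1])
    return best_path, best_score
```

But parsing result.json by hand feels fragile, like reimplementing bookkeeping that Tune already does.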

There must be a better way to do this without the analysis object.

Thanks for any advice!