Simple save() and load() interface for Ray checkpoints

PyTorch offers a very simple interface, torch.save() and torch.load(), for interacting with checkpoints without requiring any knowledge of the config or env. This is very useful when the checkpoint was saved by some legacy env/config that is no longer accessible. Ray checkpoints do appear to have their configs saved with them, but when trying to use the .restore() method, Ray still requires the env/config to be passed explicitly as arguments. Would it be possible for Ray to provide a simpler interface like the one torch offers?
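For reference, this is a minimal sketch of the PyTorch-style round trip being described: torch.save() takes an object and a path, and torch.load() needs only the path, with no model config or environment required (the payload here is an arbitrary illustrative dict):

```python
import os
import tempfile

import torch

# Any picklable payload works; a dict of tensors/primitives is typical.
ckpt = {"weights": torch.ones(2, 3), "step": 10}

path = os.path.join(tempfile.mkdtemp(), "ckpt.pt")
torch.save(ckpt, path)       # save: just an object and a path
restored = torch.load(path)  # load: just the path, no config/env needed

assert torch.equal(restored["weights"], ckpt["weights"])
assert restored["step"] == 10
```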

Thanks for the feedback @zhh210!

Could you elaborate more on what env and config you are talking about here?

I’m all for making the interface simpler. It would help if you could provide code snippets showing what you’d like to do vs. how it’s currently done with Ray Checkpoints, to paint the picture more clearly.

Thanks @amogkam. The interface has evolved a lot; I was referring to this legacy issue on GitHub. I just found that the improvement actually shipped in the recent Ray 2.2.0 release:

from ray.rllib.algorithms.algorithm import Algorithm
algo = Algorithm.from_checkpoint(checkpoint_path)

which is easier than older Ray versions, which required specifying the config and env explicitly:

from ray.rllib.algorithms.ppo import PPO
algo = PPO(config=config, env=env_class)
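To complete that older workflow: constructing the algorithm only rebuilds it, and a separate .restore() call is then needed to load the checkpointed state. A rough sketch (assuming config, env_class, and checkpoint_path are available from the original training run):

```python
from ray.rllib.algorithms.ppo import PPO

# Rebuild the algorithm exactly as it was configured at save time;
# the original config and env class must still be accessible.
algo = PPO(config=config, env=env_class)

# Only then load the checkpointed weights/state into the rebuilt algorithm.
algo.restore(checkpoint_path)
```

This is precisely the dependency on config/env that Algorithm.from_checkpoint() removes.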

Ah got it, this is specifically referring to RLlib checkpoints.

Glad that you find the new interface easier!