RLlib: use a checkpoint to run my simulation

Hi, I want to know: once I have a checkpoint, how can I use it in my simulation?
Also, can I use PyTorch's model.load() on the checkpoint files?
Thank you very much~

There are several ways to restore from a checkpoint, but none of them involve calling model.load() yourself; that only happens under the hood.

import numpy as np
from ray.rllib.policy.policy import Policy

# Restore a single policy from a policy-level checkpoint.
my_restored_policy = Policy.from_checkpoint("/tmp/my_policy_checkpoint")

# Use the restored policy for serving actions.
obs = np.array([1, 2, 3])  # example observation; must match the policy's observation space
action = my_restored_policy.compute_single_action(obs)
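If your simulation is a Gymnasium-style environment, you can plug the restored policy into a plain rollout loop. This is only a sketch of the idea, not your exact setup: CartPole-v1, the checkpoint path, and the episode loop are assumptions you would replace with your own environment and termination logic.

import gymnasium as gym
from ray.rllib.policy.policy import Policy

# Assumption: the checkpoint was trained on CartPole-v1; swap in your own env.
env = gym.make("CartPole-v1")
my_restored_policy = Policy.from_checkpoint("/tmp/my_policy_checkpoint")

obs, info = env.reset()
done = False
total_reward = 0.0
while not done:
    # Policy.compute_single_action() returns (action, rnn_state, extra_info).
    action, _, _ = my_restored_policy.compute_single_action(obs)
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode reward: {total_reward}")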

That is one option. Alternatively, you can restore a whole Algorithm from a Tune run:

from ray import tune
from ray.rllib.algorithms.algorithm import Algorithm

tuner = tune.Tuner(
    [...]  # trainable and param_space, as used for training
)
results = tuner.fit()

# Get the best trial's checkpoint and rebuild the full Algorithm from it.
checkpoint = results.get_best_result().checkpoint
algo = Algorithm.from_checkpoint(checkpoint)
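Regarding the PyTorch question: if the algorithm was trained with the torch framework, the restored policy already holds the loaded torch.nn.Module, so there is normally no reason to call model.load() / load_state_dict() yourself. A rough sketch of how you could reach that model from the restored Algorithm (assuming a torch-based policy; attribute details can vary across RLlib versions):

# Grab the default policy from the restored Algorithm.
policy = algo.get_policy()

# For torch-based policies, policy.model is a torch.nn.Module whose weights
# were already loaded from the checkpoint under the hood.
torch_model = policy.model
print(type(torch_model))

# If you really need the raw weights (e.g. to copy into your own module),
# you can export them as numpy arrays:
weights = policy.get_weights()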