Some questions about checkpoints in RLlib

Hello everyone, I have a few questions about checkpoints in RLlib.

Question 1: How do I save the model parameters if I use ray.tune? (The RLlib examples always save after training; I want to save them during training.)
https://docs.ray.io/en/latest/rllib/rllib-saving-and-loading-algos-and-policies.html
I found that Tune can also save checkpoints, but I don’t know whether those checkpoints include the model parameters.
https://docs.ray.io/en/latest/tune/tutorials/tune-trial-checkpoints.html

Question 2: Does Ray save model parameters automatically? I found some checkpoint files in ray_results, but no model parameters (I didn’t save a checkpoint myself).

Question 3: How do I modify the saved model parameters? For example, I trained an RL model with two parts, an environment encoder and an action head, and I want to reuse the environment encoder for another task (specifically, keep the encoder unchanged and replace the action head with one for the new task). I don’t know how to implement this idea in Ray.
Thanks for your reply!

Yes, running your RLlib algorithm with Tune, e.g. tuner = tune.Tuner(PPO, param_space=config, run_config=...); tuner.fit(), will automatically save the model parameters as part of each checkpoint. You can configure how and when checkpoints are created via the Tuner's run config; see the Tune docs: ray.tune.Tuner — Ray 2.3.0
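A minimal sketch of what that can look like, assuming Ray ~2.3 with the AIR-style run config (the environment, stop criterion, and checkpoint frequency below are just placeholders you'd adapt):

```python
from ray import air, tune
from ray.rllib.algorithms.ppo import PPOConfig

config = PPOConfig().environment("CartPole-v1")

tuner = tune.Tuner(
    "PPO",
    param_space=config.to_dict(),
    run_config=air.RunConfig(
        stop={"training_iteration": 100},
        checkpoint_config=air.CheckpointConfig(
            # Save a checkpoint (full algorithm state, incl. model weights)
            # every 10 training iterations, i.e. during training, not only at the end.
            checkpoint_frequency=10,
            # Also write one final checkpoint when the trial finishes.
            checkpoint_at_end=True,
        ),
    ),
)
results = tuner.fit()

# The checkpoints land under ~/ray_results/<experiment>/<trial>/checkpoint_*;
# the best one can also be retrieved from the result grid afterwards.
best_checkpoint = results.get_best_result(
    metric="episode_reward_mean", mode="max"
).checkpoint
```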

For Q3, I would suggest creating a custom model with .encoder and .action_head sub-networks for the first task. Save a checkpoint during training, use policy.restore to restore the model weights from the checkpoint, and then set policy.model.model.action_head = newActionHead() for the new task.
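A rough sketch of that flow, assuming a Torch policy and Ray ~2.3. The EncoderHeadModel class, the checkpoint path, and the new head below are all hypothetical, and this restores via Algorithm.from_checkpoint rather than policy.restore; also, depending on how your custom model wraps its sub-modules, the head may live at policy.model.action_head or policy.model.model.action_head.

```python
import torch.nn as nn
from ray.rllib.algorithms.algorithm import Algorithm
from ray.rllib.models import ModelCatalog
from ray.rllib.models.torch.torch_modelv2 import TorchModelV2


class EncoderHeadModel(TorchModelV2, nn.Module):
    """Hypothetical custom model with separate encoder and action-head sub-networks."""

    def __init__(self, obs_space, action_space, num_outputs, model_config, name):
        TorchModelV2.__init__(self, obs_space, action_space, num_outputs, model_config, name)
        nn.Module.__init__(self)
        self.encoder = nn.Sequential(
            nn.Linear(obs_space.shape[0], 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        self.action_head = nn.Linear(256, num_outputs)
        self.value_head = nn.Linear(256, 1)
        self._features = None

    def forward(self, input_dict, state, seq_lens):
        self._features = self.encoder(input_dict["obs"].float())
        return self.action_head(self._features), state

    def value_function(self):
        return self.value_head(self._features).squeeze(-1)


ModelCatalog.register_custom_model("encoder_head_model", EncoderHeadModel)

# ... after training with this custom model and saving a checkpoint ...

# Restore the trained algorithm from a checkpoint (path is hypothetical).
algo = Algorithm.from_checkpoint("/path/to/checkpoint_000100")
policy = algo.get_policy()
model = policy.model  # the EncoderHeadModel instance for this policy

# Freeze the pretrained encoder so it stays unchanged on the new task ...
for param in model.encoder.parameters():
    param.requires_grad = False

# ... and swap in a fresh head sized for the new task's action space.
model.action_head = nn.Linear(256, 4)  # e.g. the new task has 4 discrete actions
```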

Hope this helps.