Hello everyone, I have a few questions about checkpoints in RLlib.
Question 1: How do I save model parameters during training when using ray.tune? (The RLlib examples always save only after training finishes; I want to save periodically while training is still running.)
https://docs.ray.io/en/latest/rllib/rllib-saving-and-loading-algos-and-policies.html
I found that Tune can also save checkpoints, but I don't know whether those checkpoints actually contain the model parameters (see the sketch after the link below for what I'm trying).
https://docs.ray.io/en/latest/tune/tutorials/tune-trial-checkpoints.html
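Here is a minimal sketch of what I mean, based on the Tune trial-checkpoints doc above. I'm assuming `CheckpointConfig(checkpoint_frequency=...)` is the right knob to make an RLlib trainable write its model weights every N training iterations (the `"PPO"` algorithm and `CartPole-v1` env are just placeholders for my setup):

```python
from ray import air, tune

tuner = tune.Tuner(
    "PPO",  # placeholder algorithm
    param_space={"env": "CartPole-v1"},  # placeholder env/config
    run_config=air.RunConfig(
        stop={"training_iteration": 100},
        checkpoint_config=air.CheckpointConfig(
            checkpoint_frequency=10,  # checkpoint every 10 iterations?
            checkpoint_at_end=True,
        ),
    ),
)
results = tuner.fit()
```

Is this the intended way, and do these checkpoints include the model parameters?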
Question 2: Does Ray save model parameters automatically? I found some checkpoint-like files under ray_results, but no model parameters in them (I never saved a checkpoint myself).
Question 3: How do I modify saved model parameters? For example, I trained an RL model with two parts, an environment encoder and an action head, and I want to reuse the environment encoder for another task (specifically, keep the encoder unchanged and replace the action head with one for the new task). I don't know how to implement this idea in Ray; a sketch of what I have in mind is below.
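This is roughly the "weight surgery" I'm picturing. The `encoder.` key prefix, the checkpoint path, and `NewTaskEnv-v0` are placeholders for my setup; the real parameter names depend on how the model defines its submodules:

```python
from ray.rllib.algorithms.algorithm import Algorithm
from ray.rllib.algorithms.ppo import PPOConfig

# Load the trained algorithm and pull out its policy weights.
old_algo = Algorithm.from_checkpoint("path/to/old_checkpoint")  # placeholder path
old_weights = old_algo.get_policy().get_weights()  # dict: param name -> array

# Keep only the encoder parameters, dropping the old action head.
# (Assumes the encoder's parameter names start with "encoder.".)
encoder_weights = {
    k: v for k, v in old_weights.items() if k.startswith("encoder.")
}

# Build a fresh algorithm for the new task (its action head starts
# randomly initialized), then overwrite just the encoder parameters.
new_algo = PPOConfig().environment("NewTaskEnv-v0").build()
new_weights = new_algo.get_policy().get_weights()
new_weights.update(encoder_weights)
new_algo.get_policy().set_weights(new_weights)
```

Is something like this possible, or is there a recommended way to do it in RLlib?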
Thanks in advance for your replies!