How severe does this issue affect your experience of using Ray?
High: It blocks me from completing my task.
Hi everyone.
Using ray.tune, I am able to train my model and save its state as training progresses. However, I still haven’t found a way to reuse my trained policy in another setting (i.e., to reuse the neural network in a different experiment).
@Finebouche I had the same questions a few months ago. I found it odd that Ray Tune doesn’t provide a straightforward way to start a new Tune job from an existing Tune checkpoint; you have to go through some gyrations. I eventually learned how to use a callback to restore just the policy weights from a checkpoint and proceed from there. Of course, that starts over with no optimizer params, training-loop counters, etc., but the raw NN itself can be carried forward. For details, see Tune as part of curriculum training - #14 by gjoliver. The discussion is a bit circuitous for a while, but about halfway down you’ll see the talk of callbacks.
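Roughly, the callback ends up looking like the sketch below (untested, assuming the classic RLlib API stack; the PPO algorithm, CartPole environment, checkpoint path, and policy ID are placeholders, not values from this thread):

```python
from ray import tune
from ray.rllib.algorithms.callbacks import DefaultCallbacks
from ray.rllib.algorithms.ppo import PPOConfig
from ray.rllib.policy.policy import Policy


class RestoreWeightsCallback(DefaultCallbacks):
    """Copies pretrained policy weights into a freshly built Algorithm."""

    def on_algorithm_init(self, *, algorithm, **kwargs):
        # Load only the policy (network weights) from an earlier checkpoint.
        # Optimizer state and training-loop counters are NOT restored here.
        restored_policy = Policy.from_checkpoint(
            "/path/to/old/checkpoint/policies/default_policy"  # placeholder
        )
        algorithm.set_weights({"default_policy": restored_policy.get_weights()})


# Attach the callback to the new experiment; the new run then starts from the
# pretrained network instead of a random initialization.
config = (
    PPOConfig()
    .environment("CartPole-v1")  # the new setting / environment goes here
    .callbacks(RestoreWeightsCallback)
)
tune.Tuner("PPO", param_space=config.to_dict()).fit()
```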
Can you give me some advice on how to restore the optimizer parameters so training can continue from the checkpoint? And did you find out where the module_state.pt and default_policy_default_optimizer.pt files are read?
@starkj Thank you for your reply. In Ray 2.9.0 I have found this new functionality, which may help with restoring the optimizer parameters: Learner (Alpha) — Ray 2.9.0
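For anyone who finds this later: as far as I can tell, on the new API stack it is the Learner that writes and reads module_state.pt and default_policy_default_optimizer.pt as part of the algorithm checkpoint, so restoring the whole algorithm should bring the optimizer parameters back as well. A rough, untested sketch of continuing training that way (the checkpoint path is a placeholder):

```python
from ray.rllib.algorithms.algorithm import Algorithm

# Placeholder path to a checkpoint written by an earlier run that had the
# new API stack (RLModule + Learner) enabled.
checkpoint_path = "/path/to/previous/checkpoint"

# Rebuilds the algorithm and, via the Learner state stored in the checkpoint,
# restores the RLModule weights together with the per-module optimizer state.
algo = Algorithm.from_checkpoint(checkpoint_path)

# Training continues from the restored optimizer state and counters instead
# of starting the optimizer from scratch.
result = algo.train()
print(result["training_iteration"])
```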