Hi! I am curious what the suggested/intended workflow is for a full cycle of: training with Tune/RLlib → storing artifacts/models in MLflow → using the trained model in a custom script (with or without Serve). My question is mainly about what should be stored in MLflow: a PyTorch model extracted from a policy (e.g. PPO)? Or a whole trainer (PPOTrainer) as some kind of Python function? I'd like to have an MLflow Model available from my experiment, and an option to instantiate a PPOTrainer from that model.
One possible workflow would be (this is from one of our industry users):
1. Pre-process historic data (offline RL).
2. Train an RLlib Trainer for some time.
3. Trainer.save() or Trainer.get_policy().export_model() → MLflow (see the sketch after this list).
4. Repeat steps 2 and 3 (e.g. for hyperparameter tuning with Tune).
5. Evaluate the stored models (pick a good one to continue training or to serve).
6. Trainer.restore() (we currently have no Trainer.get_policy().import_model() method).
7. Serve the model via Ray Serve (e.g. see ray/rllib/examples/serve_and_rllib.py).
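A rough sketch of steps 2 and 3, assuming the Ray 1.x-style Trainer API and a running MLflow tracking server (the artifact paths and iteration count are just placeholders):

```python
import os

import mlflow
from ray.rllib.agents.ppo import PPOTrainer

trainer = PPOTrainer(config={"env": "CartPole-v0", "framework": "torch"})

with mlflow.start_run():
    # Step 2: train for a few iterations and log metrics as we go.
    for i in range(10):
        result = trainer.train()
        mlflow.log_metric("episode_reward_mean", result["episode_reward_mean"], step=i)

    # Step 3a: Trainer.save() writes a checkpoint file (plus metadata) into a directory;
    # log that whole directory as an MLflow artifact.
    checkpoint_file = trainer.save()
    mlflow.log_artifacts(os.path.dirname(checkpoint_file), artifact_path="rllib_checkpoint")

    # Step 3b (optional): also export the underlying model via the policy.
    trainer.get_policy().export_model("exported_model")
    mlflow.log_artifacts("exported_model", artifact_path="policy_model")
```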
Yeah, I think only the trained Trainer should be stored in MLFlow.
By that, do you mean a directory with checkpoints and config files, so that Trainer.restore() can be used to extract the policy?
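To close the loop in a custom script, one way (again just a sketch; the run id and checkpoint file name are placeholders matching the logging sketch above) would be to download the checkpoint directory from MLflow and restore the Trainer from it:

```python
import os

import mlflow
from ray.rllib.agents.ppo import PPOTrainer

client = mlflow.tracking.MlflowClient()
run_id = "<your-mlflow-run-id>"  # placeholder: the run that logged the checkpoint
local_dir = client.download_artifacts(run_id, "rllib_checkpoint")

# Rebuild a Trainer with the same config, then restore from the checkpoint file.
trainer = PPOTrainer(config={"env": "CartPole-v0", "framework": "torch"})
trainer.restore(os.path.join(local_dir, "checkpoint-10"))  # placeholder file name

policy = trainer.get_policy()  # restored policy, ready for evaluation or serving
```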
Maybe @amogkam or @architkulkarni would have some perspective here too?
I know less about the RLlib side of things so I’m not sure if this will be relevant to the original question, but here’s a reference on how to use Ray Serve with MLflow models: Serving Machine Learning Models — Ray v2.0.0.dev0
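For the RLlib side specifically, a deployment along the lines of serve_and_rllib.py might look like this (just a sketch; the checkpoint path is a placeholder and the Serve API details vary between Ray versions):

```python
import ray
from ray import serve
from ray.rllib.agents.ppo import PPOTrainer
from starlette.requests import Request


@serve.deployment(route_prefix="/ppo")
class ServePPOModel:
    def __init__(self, checkpoint_file: str):
        # Restore the trained Trainer inside the deployment replica.
        self.trainer = PPOTrainer(config={"env": "CartPole-v0", "framework": "torch"})
        self.trainer.restore(checkpoint_file)

    async def __call__(self, request: Request):
        obs = (await request.json())["observation"]
        action = self.trainer.compute_action(obs)
        return {"action": int(action)}


ray.init()
serve.start()
ServePPOModel.deploy("<path-to-checkpoint-file>")  # placeholder checkpoint path
```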
The following blog post about using Tune and Serve with MLflow might also be helpful: Anyscale - Ray & MLflow: Taking Distributed Machine Learning Applications to Production
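The integration described there is essentially Tune's MLflowLoggerCallback; a minimal sketch (tracking URI and experiment name are placeholders) looks like:

```python
from ray import tune
from ray.tune.integration.mlflow import MLflowLoggerCallback

tune.run(
    "PPO",
    config={
        "env": "CartPole-v0",
        "framework": "torch",
        "lr": tune.grid_search([1e-4, 1e-3]),
    },
    stop={"training_iteration": 10},
    checkpoint_at_end=True,
    callbacks=[
        MLflowLoggerCallback(
            tracking_uri="http://localhost:5000",  # placeholder tracking server
            experiment_name="rllib_ppo_mlflow",    # placeholder experiment name
            save_artifact=True,  # also upload trial artifacts (incl. checkpoints)
        )
    ],
)
```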