From my point of view it could be solved by integrating MLflow with RLlib checkpointing, which is why my main question is about connecting these two frameworks.
However, if there is already some tool that would fit into my requirements then I can switch to it.
We don’t offer this integration.
By default, checkpoints are written to disk.
But you can use ray.tune to specify network storage for checkpoints.
Have a look at this!
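The underlying idea, syncing each trial's local checkpoint directory to a shared storage location that MLflow (or anything else) can read from, can be sketched with the standard library alone. This is only an illustration of what Tune does for you; the function and parameter names here are hypothetical, and in recent Ray versions you would instead point the run configuration at a shared storage path rather than copying files yourself.

```python
import shutil
from pathlib import Path

def sync_checkpoint(local_dir: str, shared_root: str, trial_name: str) -> Path:
    """Copy a trial's checkpoint directory to shared storage (e.g. an NFS mount).

    Hypothetical helper mimicking what Tune's checkpoint syncing achieves:
    after this call, the checkpoint lives under <shared_root>/<trial_name>.
    """
    dest = Path(shared_root) / trial_name
    if dest.exists():
        shutil.rmtree(dest)  # replace any stale copy of this trial's checkpoint
    shutil.copytree(local_dir, dest)
    return dest
```

The exact Tune API for this has changed across Ray versions, so check the checkpointing section of the Ray documentation for the release you are on.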
Tune/RLlib don’t manage models and checkpoints this extensively out of the box.
For example, there is no registry UI like the one you mentioned.
Ultimately, this question is not specific to RLlib; it applies equally to other models that come out of the Ray universe.
Maybe @Yard1 has more to say about this.
I used a workaround in which I add an MLmodel file to the checkpoint directory. With that trick I can connect the best checkpoints (stored as artifacts) with the MLflow Model Registry.
Although it would be great to have this functionality out-of-the-box (with additional parameters to compare similar environments), for now it is enough.
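A minimal sketch of this workaround, writing a bare-bones MLmodel file into an RLlib checkpoint directory so MLflow can treat that artifact directory as a registerable model. The function name and the exact fields written are my assumptions for illustration, not the poster's actual file; a real MLmodel file for a pyfunc model would carry more metadata.

```python
from pathlib import Path

def add_mlmodel_file(checkpoint_dir: str, run_id: str) -> Path:
    """Drop a minimal MLmodel metadata file into an RLlib checkpoint directory.

    Hypothetical sketch: the fields below are illustrative, not a complete
    MLmodel specification. MLflow identifies model artifacts by this file.
    """
    mlmodel = Path(checkpoint_dir) / "MLmodel"
    mlmodel.write_text(
        "artifact_path: checkpoint\n"
        f"run_id: {run_id}\n"
        "flavors:\n"
        "  python_function:\n"
        "    loader_module: mlflow.pyfunc.model\n"
    )
    return mlmodel
```

With the file in place, the checkpoint directory can be logged as a run artifact and then registered as a model version via the registry API.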