I’m training some models for a custom MuJoCo env that closely matches a robot I’ve built IRL. I’m now looking at how to use a trained model outside of the training infrastructure (ideally).
I see that there’s `compute_actions` on the `ray.rllib.agents.trainer.Trainer` object, but that would require setting up the trainer object with all the config that goes with it.
I think you can do this without needing the entire config that was used for training your SAC agent.
In this example, we restore a trained experiment from a checkpoint, passing only the environment that the agent was originally trained on:
This example looks like it doesn’t need a config because it’s mostly using defaults. I managed to get it working for my env and training checkpoint, but I had to create a config to change the framework and model parameters before the checkpoint loaded successfully. I guess that makes sense, but it would be nice if I didn’t have to export model metadata separately from the checkpoint files, which somewhere inside probably contain the same info.
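For anyone hitting the same issue: the override I needed looked something like this (the exact values are illustrative, not my real training settings). Only the settings that differ from the SAC defaults have to match what was used at training time; the dict is passed as `config=` when constructing the trainer, before calling `restore()`:

```python
# Illustrative inference-time config override -- the framework and the
# hidden-layer sizes below are assumptions standing in for whatever
# your training run actually used.
inference_config = {
    "framework": "torch",
    "Q_model": {"fcnet_hiddens": [256, 256]},
    "policy_model": {"fcnet_hiddens": [256, 256]},
    "num_workers": 0,  # no rollout workers needed for pure inference
}
```

Without the matching `framework` and model sizes, `restore()` fails because the freshly built default network doesn’t line up with the checkpointed weights.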