How to export policy including preprocessors

I am trying to export a trained PyTorch policy and then interface with it the way I would with a Trainer, without doing any manual scaling and assignment of the input values.

import torch

agent.get_policy().export_model("path")  # writes model.pt into the "path" directory
model = torch.load("path/model.pt")

However, when I load the exported model with PyTorch, I get an object of the class “FullyConnectedNetwork”, which wraps the actual PyTorch model.

Its forward() function expects a flattened and scaled observation in the form of a Tensor. How am I supposed to pass observations to this interface? Manually scaling the observations and assigning them to positions in the tensor is error-prone and shouldn’t be necessary, since RLlib already does this internally. What is the best way to do this? Is there some functionality in the wrapper that I missed, or are we as developers supposed to reverse-engineer the preprocessing done by RLlib and apply it ourselves? I don’t want to use Trainer.compute_single_action(), since the exported model should be independent of RLlib, and I need it as a plain PyTorch model for compatibility with other libraries.
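For reference, the manual route looks roughly like the sketch below. This is not meant as a definitive recipe: I am assuming CartPole stands in for my actual env, that RLlib’s get_preprocessor catalog helper reproduces the training-time flattening/scaling, and that the exported wrapper follows the usual TorchModelV2 call convention of (input_dict, state, seq_lens). It also illustrates the problem, because it still imports RLlib at inference time:

import gym
import torch
from ray.rllib.models.preprocessors import get_preprocessor

env = gym.make("CartPole-v1")  # assumption: stand-in for my actual env

# Rebuild the preprocessor RLlib used during training -- this is the
# RLlib dependency I want the exported model NOT to need.
prep = get_preprocessor(env.observation_space)(env.observation_space)

obs = env.reset()
flat_obs = prep.transform(obs)  # flatten + scale, as RLlib does internally
obs_tensor = torch.from_numpy(flat_obs).float().unsqueeze(0)  # add batch dim

model = torch.load("path/model.pt")
logits, _state = model({"obs": obs_tensor}, [], None)  # assumed TorchModelV2 call convention

Even in this sketch I have to know the wrapper’s call convention and keep RLlib importable, which defeats the purpose of exporting.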

Help would be much appreciated!

How severely does this issue affect your experience of using Ray?

  • Medium: It contributes significant difficulty to completing my task, but I can work around it.