Hi all,
I just had a quick question regarding the Unity3D external env implementation - after training it using RLlib, will I be able to save a .onnx file to use inside Unity later as the brain? Or is the .onnx only generated when training strictly with ml-agents + Unity?
Thanks in advance,
Denys A.
Hey @Denys_Ashikhin, RLlib does not store ONNX files natively, but you can extract the (torch/keras) model after training and save it manually as ONNX.
You can get to the model via:
torch: Trainer.get_policy().model
tf: Trainer.get_policy().model.base_model # this works if your TFModelV2 exposes a base_model property holding the actual underlying keras model (note that our TFModelV2s are not themselves keras models)
If you use Tune for your training, restore from the last stored checkpoint and then do the above:
tune.run(... , checkpoint_at_end=True)
# Recover a new trainer from the trained one.
new_trainer = PPOTrainer([same config as used in tune above])
new_trainer.restore([last checkpoint file created by the above tune run])
# ... get the model and store as ONNX as described above.