Hello,
I am developing a project that uses Unity3D as the external simulation environment and RLlib for training an RL algorithm, following a client-server approach.
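For context, the client-server part follows RLlib's policy server/client pattern, roughly along these lines (a simplified sketch of my server side; the spaces, address, and port are placeholders):

```python
import gym
import ray
from ray.rllib.agents.ppo import PPOTrainer
from ray.rllib.env.policy_server_input import PolicyServerInput

ray.init()

config = {
    # No local env: observations/rewards arrive from the external Unity client.
    "env": None,
    "observation_space": gym.spaces.Box(-1.0, 1.0, (8,)),  # placeholder spaces
    "action_space": gym.spaces.Discrete(4),
    # Each rollout worker opens a REST endpoint that the client connects to.
    "input": lambda ioctx: PolicyServerInput(ioctx, "localhost", 9900),
    "input_evaluation": [],  # no off-policy estimation on external experiences
    "num_workers": 0,
    "framework": "tf",
}

trainer = PPOTrainer(config=config)
while True:
    print(trainer.train()["episode_reward_mean"])
```

The Unity process is driven through the corresponding PolicyClient on the client side, and this part of the pipeline works; my problem only starts after training, when I try to bring the model back into Unity.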
I have been able to export the TF model and convert it to .onnx format, but I still cannot manage to use it as the brain inside Unity to run inference with ML-Agents. I suspect the problem is that ML-Agents expects a specific naming convention for the input and output tensors. Is there a way to set the model's layer names inside RLlib to match the desired convention? Any other suggestions?
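In case it is useful, this is roughly what my export/inspection workflow looks like (simplified; the file paths, the original tensor names, and the target names are placeholders or guesses on my part, since I have not confirmed which names Barracuda actually looks for):

```python
import onnx

# Exported with something like: trainer.export_policy_model("./export")  (TF SavedModel)
# Converted with: python -m tf2onnx.convert --saved-model ./export --output policy.onnx
model = onnx.load("policy.onnx")

# Inspect which tensor names the exported graph actually exposes.
print("inputs: ", [i.name for i in model.graph.input])
print("outputs:", [o.name for o in model.graph.output])

def rename_tensor(m, old, new):
    """Rename a graph-level input/output and every node reference to it."""
    for t in list(m.graph.input) + list(m.graph.output):
        if t.name == old:
            t.name = new
    for node in m.graph.node:
        node.input[:] = [new if n == old else n for n in node.input]
        node.output[:] = [new if n == old else n for n in node.output]

# The old names stand in for whatever the export produces; the new ones are my
# guess at what ML-Agents/Barracuda expects -- this is exactly what I'd like to confirm.
rename_tensor(model, "default_policy/obs:0", "vector_observation")
rename_tensor(model, "default_policy/model/fc_out/BiasAdd:0", "action")
onnx.save(model, "policy_renamed.onnx")
```

Renaming tensors after the fact feels fragile, which is why I would prefer to set the names directly in the RLlib model, if that is possible at all.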
Thank you in advance,
Matthew