I have a C++ platform, and I want my inference (predictions) to run in C++. Is there a way to convert the RLlib algorithms to C++? Or is there a way to do inference of RLlib algorithms from C++?
The short answer is: I am afraid not.
The long answer is: The RLlib algorithms all follow scientific papers, and you may find C++ implementations of some of them elsewhere. These papers have been implemented in many reinforcement learning frameworks, but those implementations generally rely on TensorFlow or PyTorch at some point.
For example, OpenAI Baselines describes itself as a collection of high-performance reference implementations, and it is written in Python, too. For 99.9% of users, such implementations are perfect, so for almost everyone there is nothing to gain from calling TensorFlow or PyTorch from C++. For inference during training, you are going to use TensorFlow or PyTorch models at some point anyway. And even if you are in production and use something like Ray Serve, there will be no need for C++.
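To make concrete why the framework language matters so little at inference time: inference is ultimately just a forward pass through the trained network. Here is a minimal sketch in plain NumPy of what a discrete-action policy's forward pass boils down to (the layer sizes and weights are made-up stand-ins, not anything RLlib produces):

```python
import numpy as np

# Hypothetical policy network: 4 observation features in, 2 action logits out.
# The weights are random placeholders for whatever training would produce.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 2)), np.zeros(2)

def policy_forward(obs):
    """One forward pass: ReLU hidden layer, then argmax over action logits."""
    hidden = np.maximum(obs @ W1 + b1, 0.0)
    logits = hidden @ W2 + b2
    return int(np.argmax(logits))

obs = np.array([0.1, -0.2, 0.3, 0.0])
action = policy_forward(obs)  # a discrete action index, 0 or 1
```

In practice TensorFlow or PyTorch performs this step, and they already dispatch the heavy math to optimized native code, so rewriting the surrounding Python in C++ rarely buys speed.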
To sum it up: C++ is part of RLlib, too, but only where it helps and does not get in the way of data scientists: in Ray itself. The algorithms themselves are formulated in Python.
I hope this helps! Cheers
Thank you @arturn. So I guess I'll just implement the RLlib algorithms in Python; since they use clusters and parallel processing, they should be fast enough that I don't need a C++ implementation, as my main concern is speed. Then I can just feed the inferences (outputs) to my C++ platform. Am I right about this?
If you only have C++ interfaces to your platform, then the most straightforward way would be to wrap them with Python and call them from RLlib itself. Do you have some sort of robotics library? Are these C++ functions meant to be called by RLlib agents?
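As a sketch of the wrapping idea, Python's standard `ctypes` module can call into a compiled shared library directly. The example below wraps a function from the system math library purely for illustration; your own platform's functions (exposed with `extern "C"` in a shared library, names of your choosing) would be wrapped the same way, or with a binding generator such as pybind11:

```python
import ctypes
import ctypes.util

# Illustration only: load the system C math library and wrap one function.
# For your platform you would load your own .so/.dll instead.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the C signature: double sqrt(double)
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(9.0))  # 3.0
```

A thin wrapper like this can then be called from inside a custom Gym/Gymnasium environment, which is the usual place RLlib touches external simulators.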
I had a possibly similar situation with an application that I could interface with through a websocket from an RLlib agent.
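As a self-contained sketch of that pattern (plain TCP here rather than a full websocket, and the JSON message format is entirely made up), the agent side sends an observation over a socket and reads back feedback from the external application:

```python
import json
import socket
import threading

def fake_platform_server(server_sock):
    """Stand-in for the external C++ application: receives an observation,
    replies with a reward. The JSON protocol here is invented for the demo."""
    conn, _ = server_sock.accept()
    with conn:
        msg = json.loads(conn.recv(1024).decode())
        reply = {"reward": sum(msg["obs"])}
        conn.sendall(json.dumps(reply).encode())

# Loopback server so the sketch runs on its own.
server_sock = socket.socket()
server_sock.bind(("127.0.0.1", 0))
server_sock.listen(1)
port = server_sock.getsockname()[1]
threading.Thread(target=fake_platform_server, args=(server_sock,)).start()

# Agent side: send an observation, receive feedback from the platform.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(json.dumps({"obs": [0.5, 1.5]}).encode())
    reply = json.loads(client.recv(1024).decode())

print(reply["reward"])  # 2.0
```

The same request/reply loop would live inside an environment's `step()` method, so the C++ side never needs to know anything about RLlib.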