How severe does this issue affect your experience of using Ray?
- None: Just asking a question out of curiosity
I want to build a system where a PPO trainer runs fully on a server, and clients send requests for predictions from their environments. The server would then create an actor (or process?) that generates a new ID for the session and essentially trains on the incoming requests.
The idea is to create a game and have clients make HTTP requests for predictions (the latency does not affect the gameplay). I am aware of Unity ML-Agents, but the clients may connect and disconnect at will, and may be running in environments that don't have Python installed, for example.
Is this possible? If so, can anyone point me in the right direction?
In particular, there are policy_client and policy_server. Can I have a request sent to the policy_client whenever a client needs a new game, which would use a new method to start up a game running in parallel to the other games? The policy_client would then forward all requests as usual to the policy_server.
So instead of the policy_client running the game itself, it would receive requests (with all the game state) from different devices and act as an intermediary for the policy_server.
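To make the intermediary idea concrete, here is a minimal sketch of that session-mediator layer. It is self-contained: `RandomPolicyStub` is a hypothetical stand-in for RLlib's `PolicyClient` (whose real API includes `start_episode()`, `get_action(episode_id, obs)`, and `log_returns(episode_id, reward)`), and the `handle()` method plays the role of the HTTP endpoint that devices would POST to. Everything except those three method names is an assumption for illustration.

```python
# Sketch of the "mediator" pattern: one process keeps a table of active game
# sessions and forwards each incoming request to the right policy episode.
# RandomPolicyStub is a placeholder so the sketch runs without Ray; in practice
# it would be an rllib PolicyClient connected to a PolicyServerInput trainer.
import uuid


class RandomPolicyStub:
    """Hypothetical stand-in for rllib's PolicyClient (assumed API)."""

    def start_episode(self):
        # A real PolicyClient returns a new episode id from the server.
        return uuid.uuid4().hex

    def get_action(self, episode_id, observation):
        # A real PolicyClient would return the trained policy's action.
        return 0

    def log_returns(self, episode_id, reward):
        # A real PolicyClient forwards the reward to the trainer.
        pass


class SessionMediator:
    """Maps device-supplied session ids to policy episodes."""

    def __init__(self, client):
        self.client = client
        self.sessions = {}  # session_id -> episode_id

    def handle(self, request):
        # `request` is the decoded body of an HTTP POST, e.g.
        # {"session": "...", "obs": [...], "reward": 0.5}.
        sid = request.get("session")
        if sid is None or sid not in self.sessions:
            # Unknown device -> start a new game running alongside the others.
            sid = uuid.uuid4().hex
            self.sessions[sid] = self.client.start_episode()
        episode = self.sessions[sid]
        if "reward" in request:
            self.client.log_returns(episode, request["reward"])
        action = self.client.get_action(episode, request["obs"])
        return {"session": sid, "action": action}


mediator = SessionMediator(RandomPolicyStub())
first = mediator.handle({"obs": [0.0, 1.0]})  # no session id -> new game
follow = mediator.handle({"session": first["session"],
                          "obs": [0.1, 0.9], "reward": 1.0})
```

Because each session maps to its own episode on the policy server, games from many devices can proceed in parallel while all their experience feeds the same PPO trainer.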