Custom simulator as an RLlib environment

I’m very new to Ray RLlib and have run into an issue with a custom simulator my team made. We’re trying to integrate this Python-based simulator into Ray RLlib for single-agent DQN training, but I’m uncertain how to expose the simulator to RLlib as an environment.

According to the image below from the Ray documentation, it seems like I have two options:

  1. Standard environment: following the Carla simulator example, it seems like I can simply wrap my custom simulator with the gym.Env class API and register it as an environment using the ray.tune.registry.register_env function (roughly as in the sketch after this list).
  2. External environment: however, the image and the RLlib documentation confused me further, since they suggest that external simulators that run independently, outside the control of RLlib, should be connected via the ExternalEnv class.

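Roughly what I have in mind for option 1 is below. Everything specific here is a placeholder: `MySimulator` stands in for our actual simulator, the spaces and the name `"my_sim-v0"` are made up, and it uses the older gym reset/step API (obs-only reset, 4-tuple step) that older RLlib releases expect.

```python
import gym
import numpy as np
from gym import spaces
from ray.tune.registry import register_env


class MySimulator:
    """Stand-in for our real simulator; replace with the actual one."""

    def reset(self):
        self.t = 0
        return np.zeros(4, dtype=np.float32)

    def step(self, action):
        self.t += 1
        obs = np.random.uniform(-1.0, 1.0, size=4).astype(np.float32)
        reward = float(action)   # placeholder reward
        done = self.t >= 100     # placeholder episode end
        return obs, reward, done, {}


class MySimEnv(gym.Env):
    """Single-agent gym.Env wrapper around the custom simulator."""

    def __init__(self, env_config):
        self.sim = MySimulator()
        self.observation_space = spaces.Box(-1.0, 1.0, shape=(4,), dtype=np.float32)
        self.action_space = spaces.Discrete(2)

    def reset(self):
        return self.sim.reset()

    def step(self, action):
        return self.sim.step(action)


# Register the wrapper under a string name so RLlib can create it from the config.
register_env("my_sim-v0", lambda env_config: MySimEnv(env_config))
```
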
If anyone can suggest what I should do, it will be very much appreciated! Thanks!

Hello, the short answer is that the second option is useful when your environments are very costly to step. If your simulator does a lot of computation, or if your observations are rendered images, the RolloutWorker would otherwise sit idle waiting for each step to return. This is handled automatically (or at least it was in my case) via the config parameters described in the docs. Keep in mind that the policy might be very large and leave no room for multiple RolloutWorkers; multiple environments per worker can then be used to speed up sampling.
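
If you do end up needing the second option, a rough sketch of an ExternalEnv subclass is below. The spaces, episode length, and reward are placeholders; only `run`, `start_episode`, `get_action`, `log_returns`, and `end_episode` come from the ExternalEnv API, and you register the class with `register_env` just like a regular env.

```python
import gym
import numpy as np
from ray.rllib.env import ExternalEnv


class MySimExternalEnv(ExternalEnv):
    """Sketch: the simulator drives the loop and asks RLlib for actions."""

    def __init__(self, env_config=None):
        super().__init__(
            action_space=gym.spaces.Discrete(2),
            observation_space=gym.spaces.Box(-1.0, 1.0, shape=(4,), dtype=np.float32),
        )

    def run(self):
        # Runs in its own thread; the simulator controls the pacing.
        while True:
            episode_id = self.start_episode()
            obs = np.zeros(4, dtype=np.float32)           # placeholder first observation
            for _ in range(100):                          # placeholder episode length
                action = self.get_action(episode_id, obs)
                # obs, reward = my_simulator.step(action) # the real simulator call goes here
                obs = np.random.uniform(-1.0, 1.0, size=4).astype(np.float32)
                self.log_returns(episode_id, reward=0.0)  # placeholder reward
            self.end_episode(episode_id, obs)
```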

I think the image could be a bit clearer: instead of showing a one-to-one correspondence, the second row could show a one-to-many correspondence. Start off with the first option and get a feel for custom Gym environments. Then, depending on your environment, see how you can get the best performance, e.g. by tuning the worker and environment counts as in the sketch below.
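
Something like this is where I would start tuning; this uses the older config-dict API and the env name registered above, so the key names may differ in newer Ray releases and the counts are only examples:

```python
import ray
from ray import tune

ray.init()

# Older-style RLlib config dict; key names may differ in newer Ray releases.
tune.run(
    "DQN",
    stop={"timesteps_total": 100_000},
    config={
        "env": "my_sim-v0",        # whatever name you registered with register_env
        "framework": "torch",
        "num_workers": 2,          # parallel RolloutWorkers, if the policy fits in memory
        "num_envs_per_worker": 4,  # vectorized envs per worker to hide slow simulator steps
    },
)
```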
