Using only Ray to vectorize environments

I want to vectorize my env using Ray, ideally getting something like AsyncVectorEnv from OpenAI's Gym but with Ray under the hood rather than Python's multiprocessing.
Is it possible to do so?
Thanks in advance

Hey @ingambe ,

There are two ways to vectorize your custom (e.g. gym) envs using RLlib:

  • serial: This is the default. RLlib will create n sub-envs (all instances of your custom env) and step through them in sequence, then batch the resulting observations for the next action-computing forward pass.
  • parallel: Set remote_worker_envs=True in your config. This will create n instances of your custom (gym) env and wrap each one in a @ray.remote actor, so stepping through the n sub-envs happens in parallel (see the sketch below).
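
For reference, here is a minimal config sketch. It assumes the RLlib 1.x-era import paths (newer releases moved the trainer classes around) and the classic gym step/reset API; MyEnv is just a placeholder for your own env:

```python
import gym
import numpy as np
import ray
from ray.rllib.agents.ppo import PPOTrainer  # RLlib 1.x-era import path


class MyEnv(gym.Env):
    """Trivial stand-in for your custom env."""

    def __init__(self, env_config=None):
        self.observation_space = gym.spaces.Box(-1.0, 1.0, (4,), np.float32)
        self.action_space = gym.spaces.Discrete(2)

    def reset(self):
        return self.observation_space.sample()

    def step(self, action):
        # One-step episodes, just to keep the sketch minimal.
        return self.observation_space.sample(), 1.0, True, {}


ray.init()
trainer = PPOTrainer(config={
    "env": MyEnv,
    "num_workers": 1,
    "num_envs_per_worker": 4,    # n sub-envs on each rollout worker
    "remote_worker_envs": True,  # wrap each sub-env in a @ray.remote actor
})
trainer.train()
```

With remote_worker_envs left at its default of False, the same config steps the four sub-envs serially instead.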

Thank you @sven1977
Is there a possibility to use this vectorized environment outside of RLlib?
I would like to write my own custom algorithm and use the vectorized environment for better performance (the MPI-based version performs really poorly).

You could take a look at how RemoteVectorEnv in RLlib is “stepped”. It’s basically just ray.remote calls on the individual envs’ reset/step methods, then collecting the results via ray.get.

The code is in here:
ray.rllib.env.remote_vector_env.py::RemoteVectorEnv::poll

It’s a little different from the gym API, as we support async polling of our envs.
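
Outside RLlib, the same pattern is straightforward to reproduce yourself. Below is a minimal sketch assuming the classic gym step/reset API; the RayVecEnv and _RemoteEnv names are illustrative, not an existing Ray or gym API:

```python
import gym
import ray


@ray.remote
class _RemoteEnv:
    """Actor wrapping a single env so reset/step run in their own process."""

    def __init__(self, env_id):
        self.env = gym.make(env_id)

    def reset(self):
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        if done:
            obs = self.env.reset()  # auto-reset, like gym's vector envs
        return obs, reward, done, info


class RayVecEnv:
    """Steps n remote sub-envs in parallel via Ray actors."""

    def __init__(self, env_id, num_envs):
        self.actors = [_RemoteEnv.remote(env_id) for _ in range(num_envs)]

    def reset(self):
        return ray.get([a.reset.remote() for a in self.actors])

    def step(self, actions):
        refs = [a.step.remote(act) for a, act in zip(self.actors, actions)]
        results = ray.get(refs)  # blocks until every sub-env has stepped
        obs, rewards, dones, infos = map(list, zip(*results))
        return obs, rewards, dones, infos


if __name__ == "__main__":
    ray.init()
    vec_env = RayVecEnv("CartPole-v1", num_envs=4)
    first_obs = vec_env.reset()
    obs, rewards, dones, infos = vec_env.step([0, 1, 0, 1])
```

To get the async-polling behavior of RemoteVectorEnv.poll rather than the lock-step semantics above, you would replace the blocking ray.get with ray.wait and only process the sub-envs whose step calls have already finished.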

Thanks a lot @sven1977!
That’s exactly what I needed :grinning: