Training ray.rllib algorithm with vectorized environments

I have created a custom vectorized environment using the static method as mentioned in, and my question is how to use the return value of that static method to train an algorithm. Is there an option to pass our own vectorized environment to algorithms in ray.rllib? If it cannot be used to train an algorithm, then what is this static method for?

If your environment is a gym environment, you actually shouldn’t need to vectorize it manually. RLlib automatically vectorizes the gym environment that is passed to an experiment, creating as many copies per worker as you configure.
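As a rough sketch of what that looks like in practice: you register the single (non-vectorized) env class and let RLlib replicate it via the `num_envs_per_worker` config key. The env name `"my_env"` and the trivial `MyEnv` class below are hypothetical placeholders, and exact config-building APIs vary across Ray versions, so treat this as a config sketch rather than copy-paste code:

```python
import gymnasium as gym
from ray.tune.registry import register_env
from ray.rllib.algorithms.ppo import PPOConfig

class MyEnv(gym.Env):
    """Hypothetical single (non-vectorized) custom env."""
    def __init__(self, config=None):
        self.observation_space = gym.spaces.Box(-1.0, 1.0, shape=(4,))
        self.action_space = gym.spaces.Discrete(2)

    def reset(self, *, seed=None, options=None):
        return self.observation_space.sample(), {}

    def step(self, action):
        return self.observation_space.sample(), 0.0, True, False, {}

# Register the plain env; RLlib handles vectorization internally.
register_env("my_env", lambda cfg: MyEnv(cfg))

config = (
    PPOConfig()
    .environment("my_env")
    # RLlib creates 4 copies of MyEnv per rollout worker and steps
    # them as a vectorized batch -- no manual VectorEnv needed.
    .rollouts(num_envs_per_worker=4)
)
algo = config.build()
```

In other words, vectorization is an RLlib config knob, not something you construct and pass in yourself.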

For an example of using a custom gym environment with our launchers, see: