Slow down environment spawning

Hello. I am trying to use the API to train an RLlib agent with concurrent trials. The problem is that my custom environment (which is already wrapped in an OpenAI Gym wrapper) will crash if two instances are created in quick succession. Is there a way I can stagger the initialization of my environments, or externalize the environments, to prevent the crash that happens when multiple trials start at the same time? Thanks for the help!


Hey @austinh123123, so are you using more than one vectorized sub-environment per worker (the `num_envs_per_worker` setting)?

As we don’t support an on_environment_created custom callback (we should add this!), there is one workaround:
Sub-class the Trainer you are using and override a single method: Trainer.validate_env. In there, do a sleep(n) or whatever else you need to do after(!) each sub-environment has been created.
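A minimal sketch of that workaround. Note the base class below is a stand-in so the snippet runs on its own; in real code you would subclass the concrete RLlib Trainer you use (e.g. PPOTrainer), and the exact `validate_env` signature depends on your RLlib version, so check it against your installed release:

```python
import time


class Trainer:
    """Stand-in for the RLlib Trainer base class (assumption for this sketch)."""

    @staticmethod
    def validate_env(env, env_context):
        # The real RLlib method runs sanity checks on the environment here.
        pass


class StaggeredTrainer(Trainer):
    """Trainer subclass that pauses after each sub-environment is created."""

    @staticmethod
    def validate_env(env, env_context):
        # Run the normal validation first, ...
        Trainer.validate_env(env, env_context)
        # ... then sleep so the next sub-environment is not constructed
        # immediately afterwards. Tune the delay to whatever your env needs.
        time.sleep(1.0)
```

Since `validate_env` is called once per sub-environment as each one comes up, the sleep effectively staggers their construction across a worker.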