How to use gym environments that require an individual Python process with RLlib?

Hi, I am currently trying to use gym environments from RLBench (GitHub - stepjam/RLBench: A large-scale benchmark and learning environment.), whose environments use PyRep (GitHub - stepjam/PyRep: A toolkit for robot learning research.). Unfortunately, each PyRep instance needs its own process, so each env needs its own process. However, RLlib seems to create multiple copies of my environment in a single process. I was wondering if there is any way to wrap gym environments with these sorts of constraints so they are compatible with RLlib. Thanks.

Thanks for asking this question @Daniel_Lawson !
You could take a look at either our ExternalEnv API (RLlib Environments — Ray v1.1.0) or at the `remote_worker_envs=True` setting, which will create a separate process for each vectorized env copy on the rollout workers.
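
For the second option, here is a minimal sketch of what the config could look like. It assumes the RLBench gym registration described in the RLBench README (`import rlbench.gym`, then `gym.make("reach_target-state-v0")`); the registered env name `"rlbench_reach"` and the worker counts are just placeholders for illustration:

```python
import gym
import ray
from ray import tune
from ray.tune.registry import register_env


def env_creator(env_config):
    # Importing rlbench.gym registers the RLBench tasks as gym envs
    # (per the RLBench README). The env, and thus its PyRep instance,
    # is constructed inside whichever process calls this function.
    import rlbench.gym  # noqa: F401
    return gym.make("reach_target-state-v0")


register_env("rlbench_reach", env_creator)

ray.init()
tune.run(
    "PPO",
    config={
        "env": "rlbench_reach",
        "num_workers": 2,
        "num_envs_per_worker": 2,
        # With this flag, each vectorized env copy runs in its own Ray
        # actor (i.e. its own process) instead of inside the rollout
        # worker process, so every PyRep instance gets a dedicated process.
        "remote_worker_envs": True,
    },
)
```

Note that `remote_worker_envs=True` adds some inter-process communication overhead per env step, but that is usually acceptable for heavyweight simulators like RLBench/PyRep where the one-process-per-env constraint is hard.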