How to speed up RLlib training

Hi, I'm using DRL to train an agent in my custom environment.
The environment is compute-intensive (e.g. state and reward calculation, …).
There are no image operations.

Right now it takes about 20 s to finish one iteration, which consists of several hundred episodes.
How can I speed up the DRL training if I have a cluster? Is there anything else I can do?

Hey @Ethan, you can try scaling your experiments via the config settings:
num_workers (parallelizes env data collection across rollout workers), num_envs_per_worker (same, via vectorized env copies per worker), and num_gpus (data-parallelizes the learning updates).
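For reference, here is a minimal sketch of how those keys could be set, assuming an older RLlib API that accepts a plain config dict; the env name "MyCustomEnv" and the exact values are placeholders you'd replace with your own:

```python
# Minimal sketch (assumes an older RLlib config-dict API and a registered custom env).
import ray
from ray import tune

config = {
    "env": "MyCustomEnv",        # placeholder: register your custom env under this name
    "num_workers": 8,            # parallel rollout workers collecting env samples
    "num_envs_per_worker": 4,    # vectorized env copies stepped inside each worker
    "num_gpus": 1,               # GPUs used by the learner for policy updates
    "framework": "torch",
}

ray.init(address="auto")         # connect to an existing Ray cluster; omit for local runs
tune.run("PPO", config=config, stop={"training_iteration": 100})
```

Since your bottleneck is CPU-bound state/reward computation rather than the learning update itself, increasing num_workers (spread across the cluster nodes) is usually the first knob to turn.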