I wonder what the best configuration is for training on a server with no GPU, only 64 CPUs and 256 GB RAM.
Right now I'm testing the configuration below, but maybe there are better settings:
config['num_gpus'] = 0
config['num_workers'] = 53
config['evaluation_num_workers'] = 10
How does this configuration depend on environment complexity?
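For context, here is a minimal sketch of how these keys fit into a full dict-style RLlib setup on a CPU-only machine. The PPO trainer and the CartPole-v1 env are just placeholders for whatever algorithm/env is actually used, and `evaluation_interval` is an assumption added so the eval workers get used:

```python
import ray
from ray.rllib.agents.ppo import PPOTrainer, DEFAULT_CONFIG

ray.init(num_cpus=64)  # CPU-only server

config = DEFAULT_CONFIG.copy()
config['env'] = 'CartPole-v1'          # placeholder env
config['num_gpus'] = 0                 # no GPU available
config['num_workers'] = 53             # rollout workers
config['evaluation_num_workers'] = 10  # separate eval workers
config['evaluation_interval'] = 1      # evaluate every training iteration (assumption)

trainer = PPOTrainer(config=config)
result = trainer.train()
```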
Hey @Peter_Pirog, great question. Yeah, these settings make a lot of sense.
If the environment is very complex and takes a long time to step (which is normally not the case), you may even want to parallelize the vectorized sub-envs via remote_worker_envs=True.
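Roughly, that would look like this in the dict-style config (the `num_envs_per_worker` value is just an example):

```python
# Sketch: step the vectorized sub-envs as remote Ray actors.
# Only worth it if a single env.step() is expensive; otherwise
# the extra actor round-trips add overhead.
config['num_envs_per_worker'] = 4    # example: vectorize 4 envs per worker
config['remote_worker_envs'] = True  # create those sub-envs in remote processes
```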
Also, if you don’t need that much evaluation going on, you could lower your number of eval workers.
You may also want to try running evaluation and training in parallel via evaluation_parallel_to_training=True.
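Combining both suggestions, a sketch could look like this (the eval worker count is just an example value):

```python
# Sketch: fewer eval workers frees CPUs for rollout workers,
# and evaluation runs concurrently with training instead of after it.
config['evaluation_num_workers'] = 5               # example: fewer eval workers
config['evaluation_interval'] = 1                  # evaluate every iteration
config['evaluation_parallel_to_training'] = True   # overlap eval with training
```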