How severely does this issue affect your experience of using Ray?
- Medium: It causes significant difficulty in completing my task, but I can work around it.
I am trying to understand the difference here. I have a single env that I'm trying to parallelize. Setting .env_runners(num_env_runners=xx) seems to do the same thing as .rollouts(num_rollout_workers=xx).
It also seems like the docs have nothing on num_rollout_workers; the Getting Started with RLlib — Ray 2.32.0 page is empty. And .env_runners apparently also has a num_rollout_workers kwarg, which is likewise undocumented (ray.rllib.algorithms.algorithm_config.AlgorithmConfig.env_runners — Ray 2.32.0). So this is all a bit confusing to me.
The workers and rollout_workers concepts have been changed in the new API stack; e.g., today's PR [RLlib] Cleanup, rename, clarify: Algorithm.workers/evaluation_workers, local_worker(), etc. by sven1977 · Pull Request #46726 · ray-project/ray · GitHub cleaned up multiple old mentions in the code and docs. If you are using the new API stack, always use env_runners.
Thank you for the response! I believe I'm using the old API stack, following the current docs in general. The PR does not exactly clarify my question (at least based on what I know about the library). I am just creating an algorithm object and configuring it from there. Should I just not use .rollouts, as it is not documented to begin with? Is .env_runners(num_env_runners=xx) the same thing?
If you are using the new stack, PPOConfig().env_runners() should be used, yes.