During DQN training, is it possible to configure a certain number of workers to fill the replay buffer until reaching `learning_starts`, and then decrease the number of workers afterwards?
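As far as I know there is no built-in option that changes the worker count mid-training, but one possible workaround is a two-phase setup: run a first trainer with many rollout workers only to fill the buffer, checkpoint it, then continue with a second trainer that has fewer workers. The sketch below assumes the older `ray.rllib.agents.dqn.DQNTrainer` API (as in the related topics); config key names such as `learning_starts`, `buffer_size`, and `num_workers` vary across Ray versions, and the checkpoint may not carry over the replay buffer contents, so please verify against your version before relying on it.

```python
# Minimal two-phase sketch (assumption: Ray 1.x-style DQNTrainer API).
import ray
from ray.rllib.agents.dqn import DQNTrainer

ray.init()

base_config = {
    "env": "CartPole-v0",
    "learning_starts": 10_000,  # steps to collect before learning begins
    "buffer_size": 50_000,      # replay buffer capacity
}

# Phase 1: many rollout workers just to fill the replay buffer quickly.
warmup_trainer = DQNTrainer(config={**base_config, "num_workers": 8})
result = warmup_trainer.train()
while result["timesteps_total"] < base_config["learning_starts"]:
    result = warmup_trainer.train()
checkpoint_path = warmup_trainer.save()
warmup_trainer.stop()

# Phase 2: continue from the checkpoint with fewer workers.
# NOTE (assumption): depending on the Ray version, the replay buffer may not
# be included in the checkpoint, so the second trainer might refill it itself.
main_trainer = DQNTrainer(config={**base_config, "num_workers": 2})
main_trainer.restore(checkpoint_path)
for _ in range(100):
    main_trainer.train()
```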