During DQN training, is it possible to configure a certain number of workers to fill the replay buffer until `learning_starts` is reached, and then decrease the number of workers?
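As far as I know there is no single config key in RLlib that shrinks the worker pool mid-run, so one workaround is a two-phase run: warm up with many workers, checkpoint, then rebuild the trainer with fewer workers and restore. Below is a minimal sketch assuming the old `ray.rllib.agents.dqn.DQNTrainer` API (the one referenced in the related threads); the env name, worker counts, and `learning_starts` value are placeholders, and note the caveat in the comments that in many RLlib versions the replay buffer itself is not part of the checkpoint, so only the learned weights survive the restore:

```python
import ray
from ray.rllib.agents.dqn import DQNTrainer

ray.init()

LEARNING_STARTS = 50_000

# Phase 1: many workers, mainly to fill the replay buffer quickly.
config = {
    "env": "CartPole-v1",
    "num_workers": 8,
    "learning_starts": LEARNING_STARTS,
}
trainer = DQNTrainer(config=config)

# Train until enough env steps have been sampled to start learning.
# (The "num_steps_sampled" key exists in the result dict of 1.x-era
# RLlib; newer versions may report this under a different name.)
while True:
    result = trainer.train()
    if result["info"]["num_steps_sampled"] >= LEARNING_STARTS:
        break

checkpoint = trainer.save()
trainer.stop()

# Phase 2: rebuild with fewer workers for the rest of training.
# Caveat: the replay buffer is typically NOT checkpointed in these
# versions, so the buffer starts empty here; only the network
# weights and optimizer state are restored.
config["num_workers"] = 2
trainer = DQNTrainer(config=config)
trainer.restore(checkpoint)

for _ in range(100):
    trainer.train()
```

Whether this is acceptable depends on whether losing the warm-up buffer contents at the phase boundary matters for your use case; if it does, the buffer would have to be persisted and re-injected manually, which the old API does not support out of the box.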