Lowering the number of episodes per training iteration during tune.run

Hi Everyone,

I have a custom environment that takes about 40s to complete an episode and uses about 1.5 GB of RAM per worker. Using DQN, the first training iteration took over 170 episodes, so it took quite a while to complete. I am looking for a way to reduce the number of episodes used for one training iteration. I made some edits to the hyperparameters (below), but I still haven't seen the number of episodes go down:

"batch_mode": "complete_episodes",
'dueling': False,
'double_q': True,
'gamma': 1.0,
'n_step': 5,  
'lr': 1e-5,
'rollout_fragment_length': 1,
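
For reference, this is roughly how that fragment sits in the full tune.run call. The environment name and worker count below are placeholders I'm assuming, not the actual values from my run:

from ray import tune

config = {
    "env": "MyCustomEnv-v0",        # placeholder: the registered custom environment
    "num_workers": 2,               # placeholder: each worker uses ~1.5 GB of RAM
    "batch_mode": "complete_episodes",
    "dueling": False,
    "double_q": True,
    "gamma": 1.0,
    "n_step": 5,
    "lr": 1e-5,
    "rollout_fragment_length": 1,
}

tune.run("DQN", config=config)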

Would really appreciate it if you could shed some light on this!

Thanks!

train_batch_size
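
In config terms that points at something like the following; the value shown is purely illustrative, not a recommendation for this environment:

config = {
    # ... the keys already listed in the question ...
    "train_batch_size": 512,   # illustrative value; sets how many samples go into each update
}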


Increasing train_batch_size would work.
Another way is to search for a better learning rate: start with a smaller number of episodes, pick the learning rate that performs well, and then run a longer training with it.

from ray import tune

config = {
    # etc.
    'lr': tune.grid_search([1e-5, 1e-4, 1e-3]),
}

tune.run(
    "DQN",
    # etc.
    config=config,
)
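
Note that tune.grid_search expands into one trial per listed value, so the three learning rates above run as three separate trials whose results you can compare before committing to the long training run.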