How to change default agent_timesteps_total in rllib_trainer.train()

Hi
Could someone please tell me how to change the default agent_timesteps_total in rllib_trainer.train()? The default is 4000 steps. How can we change this? Thanks

Hi Arif,

agent_timesteps_total is a metric that shows you what its name suggests.
In the Tune documentation you will find multiple ways to stop a training run:

  • stop (dict | callable | Stopper) – Stopping criteria. If dict, the keys may be any field in the return result of train(), whichever is reached first. If function, it must take (trial_id, result) as arguments and return a boolean (True if trial should be stopped, False otherwise). This can also be a subclass of ray.tune.Stopper, which allows users to implement custom experiment-wide stopping (i.e., stopping an entire Tune run based on some time constraint).

So in your case, you could call tune.run() like this:
tune.run(stop={"timesteps_total": 4000})
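As the docs excerpt above notes, stop can also be a callable instead of a dict. A minimal sketch of an equivalent stopping function (the result dict here is an illustrative stand-in for the metrics RLlib reports from each train() call):

```python
# Sketch: a stop callable receives (trial_id, result) and returns True
# when the trial should stop. "timesteps_total" is one of the metrics
# reported in each result dict.
def stop_fn(trial_id, result):
    return result["timesteps_total"] >= 4000

# Illustrative result dicts, as Tune would pass them:
print(stop_fn("trial_0", {"timesteps_total": 3999}))  # False
print(stop_fn("trial_0", {"timesteps_total": 4000}))  # True
```

You would then pass it as tune.run(..., stop=stop_fn). The dict form is simpler for threshold criteria; the callable form lets you combine several conditions.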

Hope this helps

Thanks a lot @arturn


Hi @Arif_Jahangir,

There is also a key in the config called "timesteps_per_iteration" that controls how many new timesteps of experience are collected for each call to train(). For PPO the default is 4000, but you can adjust that if you want.
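For example (a hedged sketch: the environment name and the value 1000 are chosen purely for illustration; the dict would be passed as the config argument to tune.run or the trainer constructor):

```python
# Sketch: overriding timesteps_per_iteration in the trainer config.
# With ray[rllib] installed, this dict would be passed e.g. as
# tune.run("PPO", config=config) to change how many env steps are
# collected per train() call.
config = {
    "env": "CartPole-v0",             # illustrative environment
    "timesteps_per_iteration": 1000,  # default discussed above is 4000
}
print(config["timesteps_per_iteration"])  # 1000
```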

The intended usage in RLlib is that the train() function is called many times in a loop, either by you or automatically by Tune. You can use the stopping criteria @arturn mentioned in combination with Tune to determine when training should stop.
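The manual-loop pattern can be sketched like this. Constructing a real trainer requires ray[rllib] (e.g. PPOTrainer from ray.rllib.agents.ppo), so a stand-in class is used below purely to illustrate the loop structure; the per-iteration step count of 1000 is an assumption for the demo:

```python
# Stand-in for an RLlib Trainer, to illustrate the loop only.
class FakeTrainer:
    def __init__(self):
        self.total = 0

    def train(self):
        # Pretend each iteration collects 1000 timesteps and
        # returns a result dict like RLlib's train() does.
        self.total += 1000
        return {"timesteps_total": self.total}

trainer = FakeTrainer()
while True:
    result = trainer.train()
    # Same criterion the tune.run(stop={...}) example applies:
    if result["timesteps_total"] >= 4000:
        break
print(result["timesteps_total"])  # 4000
```

With Tune, this loop and the stop check are handled for you by tune.run.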
