Agent.train() vs ray.tune.run

Are these two ways of training an agent equivalent in terms of number of steps and exploration strategies?

# Option 1: let Tune drive the training loop
analysis = ray.tune.run(dqn.DQNTrainer, config=self.config, local_dir=self.save_dir,
                        stop={"training_iteration": 100}, checkpoint_at_end=True)

# Option 2: step the trainer manually
for n in range(100):
    result = agent.train()

Hi @carlorop ,

Yes, Tune will only call Trainable.train(), which counts as one training_iteration, so it will do this 100 times before returning. The additional code around Trainable.train() does not change the inner workings of the algorithm; it is mainly concerned with bookkeeping such as metrics and checkpointing. Since the same train() method is called the same number of times, the number of steps and the exploration schedule are the same in both cases.
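
For completeness, here is a minimal sketch of the manual loop, including the equivalent of checkpoint_at_end=True. It assumes the Ray 1.x-era rllib.agents.dqn API from the snippet above and uses a hypothetical CartPole-v0 config in place of self.config:

import ray
from ray.rllib.agents import dqn

ray.init()

# Hypothetical config; stands in for self.config in the question.
config = dqn.DEFAULT_CONFIG.copy()
config["env"] = "CartPole-v0"

agent = dqn.DQNTrainer(config=config)

for n in range(100):
    # Each call is one training_iteration; the exploration schedule
    # (e.g. epsilon annealing) advances exactly as it would under Tune,
    # because Tune calls this same method.
    result = agent.train()
    print(n, result["episode_reward_mean"])

# Manual equivalent of checkpoint_at_end=True
checkpoint_path = agent.save("./checkpoints")
print("Checkpoint saved to", checkpoint_path)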