Evaluation with trainer.restore() without cluttering ray_results

Hi,
I have the problem that whenever I evaluate trained models, I load them with trainer.restore() and then use trainer.compute_action(). However, every time I do this, a (for me useless) directory is created in ray_results, cluttering it. Is there some way to prevent these directories and only create result directories when I’m actually training?
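For reference, this is roughly the loop I’m running (a minimal sketch; the config, env, and checkpoint path are placeholders for my actual setup):

import gym
from ray.rllib.agents.ppo import PPOTrainer

trainer = PPOTrainer(config=config, env="CartPole-v0")
# Just constructing/restoring the trainer already creates a directory under ~/ray_results,
# even though no training happens here.
trainer.restore("[path to checkpoint]")

env = gym.make("CartPole-v0")
obs = env.reset()
done = False
while not done:
    action = trainer.compute_action(obs)
    obs, reward, done, _ = env.step(action)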
Best regards

I don’t think we support this right now. There is a logger_config key in the Trainer config, but it’s not used anywhere in RLlib. You could create your Trainer object using the logger_creator arg, which takes a callable that builds a logger object (and in there, use your own choice of output dir):
e.g.

from ray.rllib.agents.ppo import PPOTrainer
from ray.tune.logger import JsonLogger

trainer = PPOTrainer(config=config, env="CartPole-v0", logger_creator=lambda config: JsonLogger(config, "[your output dir]"))
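Note that the directory you pass to the logger generally has to exist before the logger opens its output files. A minimal sketch that creates it first (the path is a placeholder and custom_logger_creator is just an illustrative name, not an RLlib API):

import os
from ray.rllib.agents.ppo import PPOTrainer
from ray.tune.logger import JsonLogger

def custom_logger_creator(config):
    # Make sure the output dir exists, then hand it to the logger.
    logdir = "[your output dir]"
    os.makedirs(logdir, exist_ok=True)
    return JsonLogger(config, logdir)

trainer = PPOTrainer(config=config, env="CartPole-v0", logger_creator=custom_logger_creator)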