Hi, I’m working on a custom eval function for my environment. I want to evaluate my agents with a few parallel workers, collect some stats from the envs, and finally log them to local files. I have the evaluation part working, based on this example, but I’m not sure how to approach collecting the stats. Thanks.
Here is the function, if this helps you:
import ray
from ray.rllib.evaluation.metrics import collect_episodes, summarize_episodes

def custom_eval_func(trainer, eval_workers):
    workers = eval_workers.remote_workers()
    e_config = trainer.config["evaluation_config"]
    n_agents = e_config["n_agents"]
    n_rounds = trainer.config["evaluation_num_episodes"]

    tested_agent = 0
    while tested_agent < n_agents:
        workers_used = 0
        for worker in workers:
            # Assign an agent to be evaluated by this worker. Bind the
            # current agent id through a default argument; otherwise every
            # lambda would see the final value of tested_agent.
            worker.foreach_env.remote(
                lambda env, agent=tested_agent: env.env.reset_as_test(agent))
            workers_used += 1
            tested_agent += 1
            # Wrap around if we have spare workers and rounds left
            if tested_agent == n_agents:
                if n_rounds > 1:
                    tested_agent = 0
                    n_rounds -= 1
                else:
                    break
        # Collect one rollout from each worker that was assigned an agent
        ray.get([worker.sample.remote() for worker in workers[:workers_used]])

    # TODO: Store the attention matrix visualization
    episodes, _ = collect_episodes(
        remote_workers=eval_workers.remote_workers(),
        timeout_seconds=99999)
    metrics = summarize_episodes(episodes)
    return metrics
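For the stats part, here is roughly what I had in mind, in case it clarifies the question. I'm assuming I could fetch per-env stat dicts with something like `ray.get([w.foreach_env.remote(lambda env: env.env.get_stats()) for w in workers])` (where `get_stats` is a hypothetical method on my env, not something RLlib provides), and then aggregate and append them to a local file. The aggregation/logging half is the only part sketched here, since it runs without Ray:

```python
import json
import os
import tempfile
from statistics import mean

def log_env_stats(per_env_stats, out_path):
    """Average a list of per-env stat dicts key-by-key and append the
    result as one JSON line to a local file."""
    keys = per_env_stats[0].keys()
    aggregated = {k: mean(s[k] for s in per_env_stats) for k in keys}
    with open(out_path, "a") as f:
        f.write(json.dumps(aggregated) + "\n")
    return aggregated

# Stand-in for what the envs might return via the hypothetical get_stats()
stats = [{"reward": 1.0, "steps": 10}, {"reward": 3.0, "steps": 20}]
path = os.path.join(tempfile.gettempdir(), "eval_stats.jsonl")
agg = log_env_stats(stats, path)
print(agg)
```

Is this the right general direction, or is there a built-in way to surface custom env stats through the eval workers?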