I’m trying to save a custom NumPy array at the end of each episode. I created a callback like this:
def on_episode_end(self, *, worker, base_env, policies, episode, env_index,
                   **kwargs):
    envs = base_env.get_unwrapped()
    # one array per sub-environment, collected at episode end
    episode.custom_metrics["rollout_arrays"] = [
        env.get_arr() for env in envs
    ]
and then in a custom logger:
def log_trial_result(self, iteration, trial, result):
    # save rollout_arrays from result
    ...
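For completeness, the two pieces are wired together roughly like this (a minimal sketch assuming the Ray 1.x DefaultCallbacks/LoggerCallback APIs; "my_env" and get_arr() are placeholders for my custom env):

from ray import tune
from ray.rllib.agents.callbacks import DefaultCallbacks
from ray.tune.logger import LoggerCallback

class MyCallbacks(DefaultCallbacks):
    def on_episode_end(self, *, worker, base_env, policies, episode,
                       env_index, **kwargs):
        # store one array per sub-environment
        episode.custom_metrics["rollout_arrays"] = [
            env.get_arr() for env in base_env.get_unwrapped()
        ]

class MyLoggerCallback(LoggerCallback):
    def log_trial_result(self, iteration, trial, result):
        # result["custom_metrics"] only holds scalar summaries here
        ...

tune.run(
    "PPO",
    config={"env": "my_env", "callbacks": MyCallbacks},  # env name is a placeholder
    callbacks=[MyLoggerCallback()],
)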
But the data in custom_metrics seems to be processed before it reaches the logger (RLlib summarizes custom metrics into scalar statistics such as mean/min/max), so the raw arrays aren’t passed through. What would be the best way to handle this?
Hi, not sure I understand correctly. You want to save an entire array as a single metric? What do you want to do with the logged array in the end?
I always saved a single scalar value for each metric so that it can be displayed properly in TensorBoard. For dicts or arrays, I simply created separate metrics for the different dict keys (e.g., measurements for different users).
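For illustration, something like this (just a sketch; get_user_rates() is a made-up env method returning one value per user):

from ray.rllib.agents.callbacks import DefaultCallbacks

class ScalarMetricsCallbacks(DefaultCallbacks):
    def on_episode_end(self, *, worker, base_env, policies, episode,
                       env_index, **kwargs):
        env = base_env.get_unwrapped()[0]
        # one scalar metric per user, so each gets its own TensorBoard curve
        for i, rate in enumerate(env.get_user_rates()):  # hypothetical method
            episode.custom_metrics[f"rate_user_{i}"] = float(rate)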
Here’s an example of what I implemented for logging a custom metric: DeepCoMP/callbacks.py at master · CN-UPB/DeepCoMP · GitHub
Maybe it’s helpful.
Thanks, but that’s not really what I’m trying to do.
Actually, you can save entire arrays, and TensorBoard will put them under Histograms if you use episode.hist_data in the callback.
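Something along these lines (a sketch; I’m assuming hist_data wants flat lists of numbers, which is why the arrays get flattened and lose their shape):

from ray.rllib.agents.callbacks import DefaultCallbacks

class HistogramCallbacks(DefaultCallbacks):
    def on_episode_end(self, *, worker, base_env, policies, episode,
                       env_index, **kwargs):
        # flat list of floats -> shows up under TensorBoard's Histograms tab
        episode.hist_data["rollout_values"] = [
            float(x)
            for env in base_env.get_unwrapped()
            for x in env.get_arr().ravel()
        ]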
But in my case I want to do some post-processing and visualization, so I need the arrays saved separately.
After some digging I realized there’s the episode.media property (which was introduced recently) that gets passed to the logger callback under episode_media, i.e.:
def on_episode_end(self, *, worker, base_env, policies, episode, env_index,
                   **kwargs):
    envs = base_env.get_unwrapped()
    # media entries are passed through to the logger unaggregated
    episode.media["rollout_arrays"] = [
        env.get_arr() for env in envs
    ]
and then in the logger callback:
def log_trial_result(self, iteration, trial, result):
    arrays = result['episode_media']['rollout_arrays']
    # save, render, whatevs with arrays
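In case it’s useful to anyone, here’s roughly how I dump them to disk (ArraySaverCallback and the file naming are my own choices; I’m assuming the collected media arrive as one list of per-env arrays per finished episode, so adjust the loop nesting to what you actually see in result["episode_media"]). It gets registered via tune.run(..., callbacks=[ArraySaverCallback()]):

import os
import numpy as np
from ray.tune.logger import LoggerCallback

class ArraySaverCallback(LoggerCallback):
    def log_trial_result(self, iteration, trial, result):
        arrays = result.get("episode_media", {}).get("rollout_arrays", [])
        out_dir = os.path.join(trial.logdir, "rollout_arrays")
        os.makedirs(out_dir, exist_ok=True)
        # assumed nesting: one list of per-env arrays per finished episode
        for ep, per_env in enumerate(arrays):
            for i, arr in enumerate(per_env):
                fname = f"it{iteration}_ep{ep}_env{i}.npy"
                np.save(os.path.join(out_dir, fname), np.asarray(arr))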