Custom logging of agent behaviors

Hi, I’m working on a custom_eval function for my environment. I want to evaluate my agents with a few parallel workers, gather some stats from the envs, and finally log them to some local files. I have the evaluation part done according to this example, but I’m not sure how to approach getting the stats. Thanks.

Here is the function, if this helps you:

import ray
from ray.rllib.evaluation.metrics import collect_episodes, summarize_episodes


def custom_eval_func(trainer, eval_workers):
    workers = eval_workers.remote_workers()
    e_config = trainer.config["evaluation_config"]
    n_agents = e_config["n_agents"]
    n_rounds = trainer.config["evaluation_num_episodes"]

    tested_agent = 0
    while tested_agent < n_agents:
        workers_used = 0
        for worker in workers:
            # Assign an agent to be evaluated by the worker. Bind the current
            # value of tested_agent via a default argument so the remote call
            # does not pick up a later value of the loop variable.
            worker.foreach_env.remote(
                lambda env, agent=tested_agent: env.env.reset_as_test(agent))
            workers_used += 1
            tested_agent += 1

            # Wrap around if we have spare workers
            if tested_agent == n_agents:
                if n_rounds > 1:
                    tested_agent = 0
                    n_rounds -= 1
                else:
                    break

        # Collect one rollout from each worker that was assigned an agent
        ray.get([worker.sample.remote() for worker in workers[:workers_used]])

        if tested_agent == n_agents and n_rounds > 1:
            tested_agent = 0
            n_rounds -= 1

    # TODO: Store the attention matrix visualization
    episodes, _ = collect_episodes(
        remote_workers=eval_workers.remote_workers(), timeout_seconds=99999)
    metrics = summarize_episodes(episodes)

    return metrics

Hi @Aceticia ,

Which stats are you referring to?
Calling summarize_episodes() and returning the metrics does what I believe you want to achieve.

Furthermore:

  1. What is reset_as_test?
  2. Is there extra functionality you need when evaluating? What is it you want that RLlib does not do automatically during evaluation?

Cheers


Hi @Aceticia,

Following up on @arturn’s post, metrics is a dictionary, so you can add more entries to it. I am not sure whether they have to go under a particular key, but I would guess they do not. I would just try it and follow the errors if any appear :grin:.
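
For example (a minimal sketch; "my_env_stat" is just a placeholder for whatever value you gather):

    metrics = summarize_episodes(episodes)
    metrics["my_env_stat"] = 0.5   # extra entry, reported together with the built-in eval metrics
    return metrics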

Sorry, my question wasn’t clear enough. I have some stats stored in the environment that I want to retrieve, but I’m not sure how to access the eval envs from inside this function. Right now I’m logging them with an on_episode_end callback and then summarizing the episodes. Is there a more direct method?
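
For reference, this is roughly what such a callback can look like (a minimal sketch; get_stats() stands in for however the env exposes its stats, the import path is ray.rllib.algorithms.callbacks on newer Ray, and get_unwrapped() is get_sub_environments() there):

from ray.rllib.agents.callbacks import DefaultCallbacks

class StatsCallbacks(DefaultCallbacks):
    def on_episode_end(self, *, worker, base_env, policies, episode,
                       env_index, **kwargs):
        # Grab the sub-env this episode ran in (may need a further .env,
        # as in custom_eval_func above) and pull the stats out of it.
        env = base_env.get_unwrapped()[env_index]
        for name, value in env.get_stats().items():   # get_stats() is hypothetical
            # custom_metrics get aggregated (mean/min/max) by summarize_episodes()
            episode.custom_metrics[name] = value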

Hi @Aceticia,

I do not know what your final goal is, but I am logging each step in the environment by using the Offline API.

You simply add to your config

"output": "my-output-path" 

and for each step the observations, actions, etc. get logged and can be read back with the JsonReader.
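
A minimal sketch of reading it back (assuming the path from the config entry above):

from ray.rllib.offline.json_reader import JsonReader

reader = JsonReader("my-output-path")   # same path as configured under "output"
batch = reader.next()                   # one batch of logged steps (obs, actions, rewards, ...)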

Hi @Aceticia,

I do not know of a more direct method.
Using on_episode_end or another callback is the intended way to handle a case like yours.
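
If it helps, hooking the callback in is just a config entry (a sketch, assuming a callback class like the StatsCallbacks you posted above and your custom_eval_func; env name and values are hypothetical):

config = {
    "env": "my_env",                           # hypothetical registered env name
    "callbacks": StatsCallbacks,               # pass the class, not an instance
    "custom_eval_function": custom_eval_func,
    "evaluation_num_workers": 2,
    "evaluation_num_episodes": 1,
    "evaluation_config": {"n_agents": 4},      # as read in custom_eval_func
}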

Cheers
