Trial Name in custom env / on_episode_start

Hi,

I am trying to save rewards and other metrics for each timestep in a custom env (I currently have access to worker_index and vector_index, but couldn't associate them with a specific trial). Is there a way to get trial_id or trial_name inside the custom env or in the default callback "on_episode_start"?

Hi @narik11 ,

so even though your environment is a custom one, it will be treated by RLlib in the same way as the other environments with regard to logging and checkpointing. If you want to write out results in JSON format for later analysis, take a look at the Output API. Basically, you add an output path to the configuration of your Trainer or tune.run() and tell RLlib where to store the results:

    # Specify where experiences should be saved:
    #  - None: don't save any experiences
    #  - "logdir" to save to the agent log dir
    #  - a path/URI to save to a custom output directory (e.g., "s3://bucket/")
    #  - a function that returns a rllib.offline.OutputWriter
    "output": "/home/narik11/ray-results/",
    # What sample batch columns to LZ4 compress in the output data.
    "output_compress_columns": ["obs", "new_obs"],
    # Max output file size before rolling over to a new file.
    "output_max_file_size": 64 * 1024 * 1024,

This will write all important variables from your environment, like rewards, observations, etc., to JSON output files. If you want to add further variables from your environment, you can use the info dictionary (the one returned by the step() function) to do so. The info dictionary is also stored in the output files.
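For example, here is a minimal sketch of a custom env whose step() puts extra values into the info dict (the env itself and key names such as "my_metric" are placeholders, not taken from your setup):

    import gym
    import numpy as np


    class MyCustomEnv(gym.Env):
        """Sketch of a custom env that exposes extra metrics via `info`."""

        def __init__(self, config=None):
            self.observation_space = gym.spaces.Box(-1.0, 1.0, shape=(4,))
            self.action_space = gym.spaces.Discrete(2)
            self.steps = 0

        def reset(self):
            self.steps = 0
            return np.zeros(4, dtype=np.float32)

        def step(self, action):
            self.steps += 1
            obs = np.random.uniform(-1.0, 1.0, size=(4,)).astype(np.float32)
            reward = float(action)
            done = self.steps >= 100
            # Everything placed into `info` travels with the sample batch
            # and is written to the JSON output files as well.
            info = {"my_metric": 2.0 * reward, "step_in_episode": self.steps}
            return obs, reward, done, info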

If you want to read the output files (they are in JSON format, with the columns listed above LZ4-compressed), take a look at the JsonReader, which makes reading these output files convenient.
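A rough sketch of reading those files back, assuming the JsonReader class from ray.rllib.offline and the output path used above:

    from ray.rllib.offline import JsonReader

    # Point the reader at the output directory (a glob or a list of
    # concrete file paths works as well).
    reader = JsonReader("/home/narik11/ray-results/")

    # next() yields one SampleBatch at a time; rewards, infos, etc. can
    # then be inspected directly. Depending on your RLlib version, the
    # LZ4-compressed columns (obs, new_obs) may need
    # batch.decompress_if_needed() before use.
    batch = reader.next()
    print(batch["rewards"])
    print(batch["infos"])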

Hope this helps

Thanks @Lars_Simon_Zehnder for your inputs. I tried the mentioned approach, and it looks like the output files are created at the worker-id level. I could append the vector_index (taken from the RolloutWorker) and other env-specific metrics, but I would like to have the trial_name or trial_id (appended to the file name or within the JSON) to track metrics at the trial level. Any ideas?


Hi @narik11 ,

since the file names are defined in the _get_file() function of the JsonWriter class, I am afraid that changing the file names to another format and adding the env metrics might require writing your own JsonWriter class.
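If you do go down that route, here is a rough, untested sketch. It assumes the JsonWriter(path, ioctx, ...) constructor and that the "output" config entry accepts a callable receiving the worker's IOContext (as the config comments above hint at). Instead of touching _get_file(), it simply redirects each trial's files into their own subdirectory; the trial_tag is something you would have to supply yourself (e.g., via env_config), since RLlib does not hand a trial id to the workers:

    import os

    from ray.rllib.offline.json_writer import JsonWriter


    class TrialTaggedJsonWriter(JsonWriter):
        """Sketch: write each trial's JSON files into its own subdirectory."""

        def __init__(self, path, ioctx=None, trial_tag="unknown-trial", **kwargs):
            trial_path = os.path.join(path, trial_tag)
            os.makedirs(trial_path, exist_ok=True)
            super().__init__(trial_path, ioctx=ioctx, **kwargs)


    # Hypothetical wiring through the "output" config entry; the
    # "trial_tag" key in env_config is a placeholder you would set per trial.
    config["output"] = lambda ioctx: TrialTaggedJsonWriter(
        "/home/narik11/ray-results/",
        ioctx=ioctx,
        trial_tag=ioctx.config.get("env_config", {}).get("trial_tag", "unknown-trial"),
    )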

As you mention the trial level: did you check out the callbacks provided by RLlib to write your custom metrics at the episode level? These custom metrics can then also be used by Tune at the trial level (e.g., to stop training or to optimize hyperparameters). They are stored in the checkpoints and can be visualized in TensorBoard.
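A minimal sketch of such a callback, assuming the DefaultCallbacks API from ray.rllib.agents.callbacks (matching the Trainer-style config above); the metric names are placeholders:

    from ray.rllib.agents.callbacks import DefaultCallbacks


    class MyCallbacks(DefaultCallbacks):
        def on_episode_end(self, *, worker, base_env, policies, episode,
                           env_index, **kwargs):
            # Values put into custom_metrics are aggregated per training
            # iteration and show up in the Tune results and in TensorBoard.
            episode.custom_metrics["episode_return"] = episode.total_reward
            # You can also pull values that your env placed into the info
            # dict (see the step() sketch above).
            info = episode.last_info_for() or {}
            episode.custom_metrics["my_metric"] = info.get("my_metric", 0.0)


    # Hypothetical wiring into the Trainer config:
    config["callbacks"] = MyCallbacks

RLlib reports these under custom_metrics (with mean/min/max aggregates), so Tune stoppers and schedulers can reference them per trial from there.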