Hi @narik11 ,
so even though your environment is a custom one, it will be treated by RLlib the same way as the built-in environments with regard to logging and checkpointing. If you want to write out results in JSON format for later analysis, take a look at the Output API. Basically, you add an output path to the config you pass to tune.run() to tell RLlib where to store results:
```python
# Specify where experiences should be saved:
#  - None: don't save any experiences
#  - "logdir" to save to the agent log dir
#  - a path/URI to save to a custom output directory (e.g., "s3://bucket/")
#  - a function that returns a rllib.offline.OutputWriter
"output": None,
# What sample batch columns to LZ4 compress in the output data.
"output_compress_columns": ["obs", "new_obs"],
# Max output file size before rolling over to a new file.
"output_max_file_size": 64 * 1024 * 1024,
```
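Put together, a config dict using these settings could look like the sketch below (the env name and output path are placeholder assumptions; substitute your own custom env and directory):

```python
# Minimal sketch of a config dict for tune.run(); "CartPole-v0" and
# "/tmp/rllib-out" are placeholders, not part of your setup.
config = {
    "env": "CartPole-v0",                       # your registered custom env
    "output": "/tmp/rllib-out",                 # where the JSON files land
    "output_compress_columns": ["obs", "new_obs"],
    "output_max_file_size": 64 * 1024 * 1024,   # roll over after 64 MiB
}

# Then pass it in as, e.g.: tune.run("PPO", config=config)
```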
This will write all the important variables from your environment, like rewards, observations, etc., to JSON output files. If you want to add further variables from your environment, you can use the info dictionary (the one returned from the step() function) to do so. The info dictionary is also stored in the output files.
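As a sketch of that idea, assuming the classic Gym-style step() signature (the class name and the extra variables below are made-up examples, not anything from your code):

```python
class MyCustomEnv:
    """Minimal sketch of a custom env; only step() is shown, and the
    env dynamics are stubbed out with placeholder values."""

    def step(self, action):
        obs, reward, done = [0.0], 1.0, False  # placeholder dynamics
        # Anything you put into the info dict ends up in the JSON
        # output files alongside obs, rewards, etc.
        info = {
            "battery_level": 0.87,     # illustrative extra variable
            "distance_to_goal": 4.2,   # illustrative extra variable
        }
        return obs, reward, done, info
```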
If you want to read the output files back (they are in JSON format, with some columns compressed), take a look at the JSONReader, which makes reading these output files convenient.
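If you just want a quick peek without going through RLlib, the writer emits one JSON object per line, so the files can be read with the stdlib. The sketch below fakes such a file with made-up contents (real files will have more columns, and compressed ones appear as encoded strings):

```python
import json
import tempfile

# Create a stand-in output file with one JSON object per line, mimicking
# the newline-delimited layout; the field values here are invented.
sample = {"rewards": [1.0, 0.5], "dones": [False, True],
          "infos": [{"battery_level": 0.87}, {"battery_level": 0.85}]}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    f.write(json.dumps(sample) + "\n")
    path = f.name

# Read it back line by line -- each line is one batch of experiences.
with open(path) as f:
    batches = [json.loads(line) for line in f]

print(batches[0]["rewards"])  # -> [1.0, 0.5]
```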
Hope this helps