Hi, I'm new to Ray and working on a custom Env. The attached code works fine. Is there a way I could access the custom class's variables/functions after training is complete, for debugging purposes?
Hey @narik11 ,
yes you should be able to do that like so:
```python
results = trainer.train()
print(trainer.workers.foreach_env(lambda env: env.metadata))
```
Note: Your original env class will get wrapped by RLlib: `CustomEnv` -> `rllib.env.VectorEnv` -> `rllib.env.BaseEnv`. The `BaseEnv` has a `get_unwrapped()` method, which returns a list of all (vectorized) `CustomEnv` objects used in your particular setup.
By default you have 3 workers (`num_workers=2` plus the 1 local worker) and no vectorization (`num_envs_per_worker=1`), so there should be 3 `CustomEnv` instances in total in your Trainer.
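To make the pattern concrete, here is a minimal self-contained sketch (no Ray required) of what `foreach_env` conceptually does: apply a function to every env held by the workers and collect the results into a list. `CustomEnv`, `FakeWorkerSet`, and the `metadata` attribute are hypothetical stand-ins, not RLlib internals.

```python
# Hypothetical stand-in for a user-defined env with debug state.
class CustomEnv:
    def __init__(self, env_id):
        self.metadata = {"env_id": env_id, "episodes_done": 0}

    def step_episode(self):
        self.metadata["episodes_done"] += 1


# Mimics trainer.workers: holds envs and maps a function over them,
# which is the core idea behind foreach_env.
class FakeWorkerSet:
    def __init__(self, num_envs):
        self._envs = [CustomEnv(i) for i in range(num_envs)]

    def foreach_env(self, fn):
        return [fn(env) for env in self._envs]


workers = FakeWorkerSet(num_envs=3)  # e.g. 2 remote workers + 1 local
workers.foreach_env(lambda env: env.step_episode())
print(workers.foreach_env(lambda env: env.metadata))
```

The key point is that the lambda you pass in runs against each unwrapped env object, so any attribute you set on your env during training remains readable afterwards.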