I'm having trouble loading a checkpoint generated by a Tune experiment. The problem is that I introduced a custom environment class without registering it, as in the example below:
I'm getting an error message from TensorFlow related to eager mode, but it is almost certainly because the environment cannot be found. Is there any workaround for loading the checkpoint? What is the easiest way to do it?
You probably mean Algorithm.from_checkpoint(). Yes, I'm using it. I also checked the dict that gathers the information needed to restore the algorithm; its env key has the value <class 'hrro_env_norm.HRROenv'>. I tried to register the environment under this name, but it isn't working. I'm on Ray 2.3.
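One detail worth checking here: `<class 'hrro_env_norm.HRROenv'>` is Python's repr of the class object itself, not a string name, so registering an env under that literal repr string will not match what the config holds. A stdlib-only illustration (the class below is a hypothetical stand-in for the one in `hrro_env_norm.py`):

```python
# Hypothetical stand-in for the env class defined in hrro_env_norm.py
class HRROenv:
    pass

# The checkpoint's config stored the class object itself, so its string form
# is Python's class repr, not a registry-friendly name:
print(str(HRROenv))      # -> <class '__main__.HRROenv'>

# The short class name is what you would more plausibly register under:
print(HRROenv.__name__)  # -> HRROenv
```

In other words, if the saved config references the class object, the fix is to make that class importable (or re-register it under a plain name and point the config at that name), not to register under the repr string.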
Sure, I'm importing the environment too. The error message is: 'tf.enable_eager_execution must be called at program startup.' The original config had eager_tracing = True.
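That particular TensorFlow error means some graph-mode TF operation ran before eager execution was enabled, which can happen if another module builds TF ops before RLlib initializes. A hedged sketch of the usual remedy (whether it applies here depends on what your `hrro_env_norm` module imports at load time):

```python
# Enable eager execution before ANY other TensorFlow op is created.
# In TF2 eager mode is the default, but a prior graph-mode call (e.g. from a
# module imported earlier) disables it and triggers exactly this error.
import tensorflow as tf

tf.compat.v1.enable_eager_execution()  # must run at program startup

# Only after this point should you import/build RLlib algorithms that use
# framework="tf2" with eager_tracing=True.
print(tf.executing_eagerly())
```

Practically, that means checking whether anything imported before `Algorithm.from_checkpoint()` (including the env module itself) touches TF in graph mode.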