Restoring an APEX_DDPG trainer from a checkpoint saved with an older Ray version


I am trying to upgrade some code from Ray 0.8.1 to Ray 1.3.0.

I am now trying to restore an APEX_DDPG trainer using a checkpoint stored with Ray 0.8.1, but I get the following error:

Traceback (most recent call last):
  File "", line 52, in <module>, config)
  File "/home/ubuntu/investiva/stocktrade/evaluate/", line 61, in run
  File "/home/ubuntu/miniconda3/envs/pcv/lib/python3.6/site-packages/ray/tune/", line 372, in restore
  File "/home/ubuntu/miniconda3/envs/pcv/lib/python3.6/site-packages/ray/rllib/agents/", line 755, in load_checkpoint
  File "/home/ubuntu/miniconda3/envs/pcv/lib/python3.6/site-packages/ray/rllib/agents/", line 191, in __setstate__
    Trainer.__setstate__(self, state)
  File "/home/ubuntu/miniconda3/envs/pcv/lib/python3.6/site-packages/ray/rllib/agents/", line 1320, in __setstate__
  File "/home/ubuntu/miniconda3/envs/pcv/lib/python3.6/site-packages/ray/rllib/evaluation/", line 1061, in restore
  File "/home/ubuntu/miniconda3/envs/pcv/lib/python3.6/site-packages/ray/rllib/policy/", line 489, in set_state
    optimizer_vars = state.pop("_optimizer_variables", None)
TypeError: pop() takes at most 1 argument (2 given)
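For reference, the failing call only works when the state object is a dict. A minimal sketch (plain Python, no Ray needed) of the two behaviours behind this exact TypeError:

```python
# dict.pop() accepts a key plus a default, which is what the RLlib call
# state.pop("_optimizer_variables", None) assumes `state` to be.
state_new = {"_optimizer_variables": [0.1, 0.2], "weights": []}
assert state_new.pop("_optimizer_variables", None) == [0.1, 0.2]
assert state_new.pop("_optimizer_variables", None) is None  # missing key -> default

# list.pop() only takes an optional integer index, so if the old checkpoint
# stored the policy state as a plain list, the same two-argument call fails
# with "TypeError: pop() takes at most 1 argument (2 given)".
state_old = [0.1, 0.2]
try:
    state_old.pop("_optimizer_variables", None)
except TypeError as exc:
    print(type(exc).__name__)  # TypeError
```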

I have found a similar issue here for a different algorithm, which was solved with this PR. Also, when I train (and save checkpoints) using tune.run("APEX_DDPG") with the current version, I do not get this error when restoring the trainer.

Is this a dict-vs-list type issue in the saved policy state, or a general incompatibility in checkpoint restoring between these Ray versions?
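In case it is the list-vs-dict format, here is the kind of workaround I am considering: a shim that wraps an old list-typed policy state into a dict before set_state() sees it. This is only a sketch; the "state" and "_optimizer_variables" key names are assumptions read off the traceback, not a confirmed Ray API.

```python
def upgrade_policy_state(state):
    """Hypothetical shim: convert a pre-1.0 list-typed policy state into
    the dict layout that the newer Policy.set_state() appears to expect.

    The "state" key name is an assumption based on the set_state line in
    the traceback above, not a documented Ray interface.
    """
    if isinstance(state, dict):
        return state  # already in the newer dict format
    # Old checkpoints stored only the weights; there are no optimizer
    # variables to restore, so leave "_optimizer_variables" out and let
    # set_state fall back to its default (None).
    return {"state": state}
```

The idea would be to unpickle the old checkpoint, pass each worker's policy state through this shim, and re-save before calling restore(), but I have not verified that this matches the actual checkpoint layout.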

Thanks in advance for your help.