Failure # 1 (occurred at 2022-08-30_03-50-20)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/ray/tune/trial_runner.py", line 886, in _process_trial
results = self.trial_executor.fetch_result(trial)
File "/usr/local/lib/python3.6/dist-packages/ray/tune/ray_trial_executor.py", line 675, in fetch_result
result = ray.get(trial_future[0], timeout=DEFAULT_GET_TIMEOUT)
File "/usr/local/lib/python3.6/dist-packages/ray/_private/client_mode_hook.py", line 105, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/ray/worker.py", line 1765, in get
raise value
ray.exceptions.RayActorError: The actor died because of an error raised in its creation task, ray::PPOTrainer.__init__() (pid=166, ip=192.168.0.42, repr=PPOTrainer)
File "/usr/local/lib/python3.6/dist-packages/ray/rllib/agents/trainer.py", line 925, in _init
raise NotImplementedError
NotImplementedError
During handling of the above exception, another exception occurred:
ray::PPOTrainer.__init__() (pid=166, ip=192.168.0.42, repr=PPOTrainer)
File "/usr/local/lib/python3.6/dist-packages/ray/rllib/agents/trainer.py", line 747, in __init__
sync_function_tpl)
File "/usr/local/lib/python3.6/dist-packages/ray/tune/trainable.py", line 124, in __init__
self.setup(copy.deepcopy(self.config))
File "/usr/local/lib/python3.6/dist-packages/ray/rllib/agents/trainer.py", line 827, in setup
num_workers=self.config["num_workers"])
File "/usr/local/lib/python3.6/dist-packages/ray/rllib/agents/trainer.py", line 2002, in _make_workers
logdir=self.logdir,
File "/usr/local/lib/python3.6/dist-packages/ray/rllib/evaluation/worker_set.py", line 103, in __init__
lambda p, pid: (pid, p.observation_space, p.action_space)))
ray.exceptions.RayActorError: The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=164, ip=192.168.0.42, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x7f0b11b84cf8>)
File "/usr/local/lib/python3.6/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 590, in __init__
seed=seed)
File "/usr/local/lib/python3.6/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 1578, in _build_policy_map
conf, merged_conf)
File "/usr/local/lib/python3.6/dist-packages/ray/rllib/policy/policy_map.py", line 134, in create_policy
observation_space, action_space, merged_config)
File "/usr/local/lib/python3.6/dist-packages/ray/rllib/policy/tf_policy_template.py", line 252, in __init__
get_batch_divisibility_req=get_batch_divisibility_req,
File "/usr/local/lib/python3.6/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 334, in __init__
self._input_dict)
File "/usr/local/lib/python3.6/dist-packages/ray/rllib/models/modelv2.py", line 232, in __call__
input_dict["obs"], self.obs_space, self.framework)
File "/usr/local/lib/python3.6/dist-packages/ray/rllib/models/modelv2.py", line 394, in restore_original_dimensions
return _unpack_obs(obs, original_space, tensorlib=tensorlib)
File "/usr/local/lib/python3.6/dist-packages/ray/rllib/models/modelv2.py", line 430, in _unpack_obs
prep.shape[0], obs.shape))
ValueError: Expected flattened obs shape of [..., 10], got (?, 6)
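The core of the failure is the last ValueError: the preprocessor built from the declared observation_space expects a flattened observation of size 10, while the environment is actually emitting size 6. A rough sanity check along these lines (CartPole is only a stand-in here, not my real env) makes that kind of mismatch easy to spot:

```python
# Compare the flattened size of the declared observation_space with the
# size of an observation the environment actually returns.
import gym
import numpy as np

env = gym.make("CartPole-v1")   # stand-in; replace with the custom env
obs = env.reset()               # older gym API (as used with Ray 1.11): reset() returns only the obs

declared = gym.spaces.flatdim(env.observation_space)  # what RLlib's preprocessor expects
actual = np.asarray(obs).size                         # what the env really produces

print(f"declared flat dim: {declared}, actual obs size: {actual}")
# If these two numbers disagree (10 vs 6 in my case), RLlib raises the
# "Expected flattened obs shape" ValueError shown above.
```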
Also, when I tried to upgrade from Ray 1.11 to Ray 2.0.0, multiple other errors popped up, so I decided to go back to the safety of Ray 1.11. Is there any guide for upgrading from previous versions of Ray to Ray 2.0.0, or should I make a new post about that?
EDIT
I had also made some changes to my observations, and that seems to have affected the shape, even though I previously thought it was still the same shape.
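To make the failure mode concrete, here is a minimal sketch (a hypothetical env, not my actual one) of the kind of inconsistency that produces this error: the observation_space still declares 10 values, but the observations returned by reset()/step() only contain 6. Rebuilding the declared space from the new observation layout (or vice versa) should make the error go away.

```python
# Hypothetical illustration of the mismatch: the declared space says 10 values,
# but reset()/step() return only 6, which is exactly
# "Expected flattened obs shape of [..., 10], got (?, 6)".
import gym
import numpy as np
from gym.spaces import Box


class MismatchedObsEnv(gym.Env):
    def __init__(self, config=None):
        self.observation_space = Box(-1.0, 1.0, shape=(10,), dtype=np.float32)  # declared: 10 values
        self.action_space = Box(-1.0, 1.0, shape=(2,), dtype=np.float32)

    def reset(self):
        # Bug: only 6 values are actually returned.
        return np.zeros(6, dtype=np.float32)

    def step(self, action):
        obs = np.zeros(6, dtype=np.float32)   # still 6, not the declared 10
        return obs, 0.0, True, {}
```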