Here is a log from a run I tried with PPO and 1 worker / 1 env; I got the same error:
2022-06-14 18:45:05,932 ERROR syncer.py:147 -- Log sync requires rsync to be installed.
(PPOTrainer pid=30456) 2022-06-14 18:45:09,680 INFO ppo.py:414 -- In multi-agent mode, policies will be optimized sequentially by the multi-GPU optimizer. Consider setting simple_optimizer=True if this doesn't work for you.
(PPOTrainer pid=30456) 2022-06-14 18:45:09,680 INFO trainer.py:903 -- Current log_level is WARN. For more information, set 'log_level': 'INFO' / 'DEBUG' or use the -v and -vv flags.
== Status ==
Current time: 2022-06-14 18:45:13 (running for 00:00:07.80)
Memory usage on this node: 15.9/95.9 GiB
Using FIFO scheduling algorithm.
Resources requested: 2.0/32 CPUs, 1.0/1 GPUs, 0.0/49.48 GiB heap, 0.0/24.74 GiB objects
Result logdir: C:\Users\m1\ray_results\run
Number of trials: 1/1 (1 RUNNING)
2022-06-14 18:45:13,587 ERROR trial_runner.py:886 -- Trial PPO_None_f6442_00000: Error processing event.
NoneType: None
== Status ==
Current time: 2022-06-14 18:45:13 (running for 00:00:07.80)
Memory usage on this node: 15.9/95.9 GiB
Using FIFO scheduling algorithm.
Resources requested: 0/32 CPUs, 0/1 GPUs, 0.0/49.48 GiB heap, 0.0/24.74 GiB objects
Result logdir: C:\Users\m1\ray_results\run
Number of trials: 1/1 (1 ERROR)
Number of errored trials: 1
+----------------------+--------------+-----------------------------------------------------------------------------------+
| Trial name           |   # failures | error file                                                                        |
|----------------------+--------------+-----------------------------------------------------------------------------------|
| PPO_None_f6442_00000 | 1 | C:\Users\m1\ray_results\run\PPO_None_f6442_00000_0_2022-06-14_18-45-05\error.txt |
+----------------------+--------------+-----------------------------------------------------------------------------------+
2022-06-14 18:45:13,594 ERROR ray_trial_executor.py:107 -- An exception occurred when trying to stop the Ray actor:
Traceback (most recent call last):
File "D:\Proj\.env\lib\site-packages\ray\tune\ray_trial_executor.py", line 98, in post_stop_cleanup
ray.get(future, timeout=0)
File "D:\Proj\.env\lib\site-packages\ray\_private\client_mode_hook.py", line 105, in wrapper
return func(*args, **kwargs)
File "D:\Proj\.env\lib\site-packages\ray\worker.py", line 1833, in get
raise value
ray.exceptions.RayActorError: The actor died because of an error raised in its creation task, ray::PPOTrainer.__init__() (pid=30456, ip=127.0.0.1, repr=PPOTrainer)
File "D:\Proj\.env\lib\site-packages\ray\util\tracing\tracing_helper.py", line 462, in _resume_span
return method(self, *_args, **_kwargs)
File "D:\Proj\.env\lib\site-packages\ray\rllib\agents\trainer.py", line 1074, in _init
raise NotImplementedError
NotImplementedError
During handling of the above exception, another exception occurred:
ray::PPOTrainer.__init__() (pid=30456, ip=127.0.0.1, repr=PPOTrainer)
File "python\ray\_raylet.pyx", line 658, in ray._raylet.execute_task
File "python\ray\_raylet.pyx", line 699, in ray._raylet.execute_task
File "python\ray\_raylet.pyx", line 665, in ray._raylet.execute_task
File "python\ray\_raylet.pyx", line 669, in ray._raylet.execute_task
File "python\ray\_raylet.pyx", line 616, in ray._raylet.execute_task.function_executor
File "D:\Proj\.env\lib\site-packages\ray\_private\function_manager.py", line 675, in actor_method_executor
return method(__ray_actor, *args, **kwargs)
File "D:\Proj\.env\lib\site-packages\ray\util\tracing\tracing_helper.py", line 462, in _resume_span
return method(self, *_args, **_kwargs)
File "D:\Proj\.env\lib\site-packages\ray\rllib\agents\trainer.py", line 870, in __init__
super().__init__(
File "D:\Proj\.env\lib\site-packages\ray\tune\trainable.py", line 156, in __init__
self.setup(copy.deepcopy(self.config))
File "D:\Proj\.env\lib\site-packages\ray\util\tracing\tracing_helper.py", line 462, in _resume_span
return method(self, *_args, **_kwargs)
File "D:\Proj\.env\lib\site-packages\ray\rllib\agents\trainer.py", line 950, in setup
self.workers = WorkerSet(
File "D:\Proj\.env\lib\site-packages\ray\rllib\evaluation\worker_set.py", line 142, in __init__
remote_spaces = ray.get(
File "D:\Proj\.env\lib\site-packages\ray\_private\client_mode_hook.py", line 105, in wrapper
return func(*args, **kwargs)
File "D:\Proj\.env\lib\site-packages\ray\worker.py", line 1833, in get
raise value
ray.exceptions.RayActorError: The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=31592, ip=127.0.0.1, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x000002E1FEE24B50>)
File "python\ray\_raylet.pyx", line 665, in ray._raylet.execute_task
File "python\ray\_raylet.pyx", line 669, in ray._raylet.execute_task
File "python\ray\_raylet.pyx", line 616, in ray._raylet.execute_task.function_executor
File "D:\Proj\.env\lib\site-packages\ray\_private\function_manager.py", line 675, in actor_method_executor
return method(__ray_actor, *args, **kwargs)
File "D:\Proj\.env\lib\site-packages\ray\util\tracing\tracing_helper.py", line 462, in _resume_span
return method(self, *_args, **_kwargs)
File "D:\Proj\.env\lib\site-packages\ray\rllib\evaluation\rollout_worker.py", line 573, in __init__
self.policy_dict = _determine_spaces_for_multi_agent_dict(
File "D:\Proj\.env\lib\site-packages\ray\rllib\evaluation\rollout_worker.py", line 1937, in _determine_spaces_for_multi_agent_dict
raise ValueError(
ValueError: `observation_space` not provided in PolicySpec for default_policy and env does not have an observation space OR no spaces received from other workers' env(s) OR no `observation_space` specified in config!
(PPOTrainer pid=30456) 2022-06-14 18:45:13,583 ERROR worker.py:451 -- Exception raised in creation task: The actor died because of an error raised in its creation task, ray::PPOTrainer.__init__() (pid=30456, ip=127.0.0.1, repr=PPOTrainer)
(PPOTrainer pid=30456) File "D:\Proj\.env\lib\site-packages\ray\util\tracing\tracing_helper.py", line 462, in _resume_span
(PPOTrainer pid=30456) return method(self, *_args, **_kwargs)
(PPOTrainer pid=30456) File "D:\Proj\.env\lib\site-packages\ray\rllib\agents\trainer.py", line 1074, in _init
(PPOTrainer pid=30456) raise NotImplementedError
(PPOTrainer pid=30456) NotImplementedError
(PPOTrainer pid=30456)
(PPOTrainer pid=30456) During handling of the above exception, another exception occurred:
(PPOTrainer pid=30456)
(PPOTrainer pid=30456) ray::PPOTrainer.__init__() (pid=30456, ip=127.0.0.1, repr=PPOTrainer)
(PPOTrainer pid=30456) File "python\ray\_raylet.pyx", line 658, in ray._raylet.execute_task
(PPOTrainer pid=30456) File "python\ray\_raylet.pyx", line 699, in ray._raylet.execute_task
(PPOTrainer pid=30456) File "python\ray\_raylet.pyx", line 665, in ray._raylet.execute_task
(PPOTrainer pid=30456) File "python\ray\_raylet.pyx", line 669, in ray._raylet.execute_task
(PPOTrainer pid=30456) File "python\ray\_raylet.pyx", line 616, in ray._raylet.execute_task.function_executor
(PPOTrainer pid=30456) File "D:\Proj\.env\lib\site-packages\ray\_private\function_manager.py", line 675, in actor_method_executor
(PPOTrainer pid=30456) return method(__ray_actor, *args, **kwargs)
(PPOTrainer pid=30456) File "D:\Proj\.env\lib\site-packages\ray\util\tracing\tracing_helper.py", line 462, in _resume_span
(PPOTrainer pid=30456) return method(self, *_args, **_kwargs)
(PPOTrainer pid=30456) File "D:\Proj\.env\lib\site-packages\ray\rllib\agents\trainer.py", line 870, in __init__
(PPOTrainer pid=30456) super().__init__(
(PPOTrainer pid=30456) File "D:\Proj\.env\lib\site-packages\ray\tune\trainable.py", line 156, in __init__
(PPOTrainer pid=30456) self.setup(copy.deepcopy(self.config))
(PPOTrainer pid=30456) File "D:\Proj\.env\lib\site-packages\ray\util\tracing\tracing_helper.py", line 462, in _resume_span
(PPOTrainer pid=30456) return method(self, *_args, **_kwargs)
(PPOTrainer pid=30456) File "D:\Proj\.env\lib\site-packages\ray\rllib\agents\trainer.py", line 950, in setup
(PPOTrainer pid=30456) self.workers = WorkerSet(
(PPOTrainer pid=30456) File "D:\Proj\.env\lib\site-packages\ray\rllib\evaluation\worker_set.py", line 142, in __init__
(PPOTrainer pid=30456) remote_spaces = ray.get(
(PPOTrainer pid=30456) File "D:\Proj\.env\lib\site-packages\ray\_private\client_mode_hook.py", line 105, in wrapper
(PPOTrainer pid=30456) return func(*args, **kwargs)
(PPOTrainer pid=30456) File "D:\Proj\.env\lib\site-packages\ray\worker.py", line 1833, in get
(PPOTrainer pid=30456)     raise value
(PPOTrainer pid=30456) ray.exceptions.RayActorError: The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=31592, ip=127.0.0.1, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x000002E1FEE24B50>)
(PPOTrainer pid=30456)   File "python\ray\_raylet.pyx", line 665, in ray._raylet.execute_task
(PPOTrainer pid=30456)   File "python\ray\_raylet.pyx", line 669, in ray._raylet.execute_task
(PPOTrainer pid=30456)   File "python\ray\_raylet.pyx", line 616, in ray._raylet.execute_task.function_executor
(PPOTrainer pid=30456)   File "D:\Proj\.env\lib\site-packages\ray\_private\function_manager.py", line 675, in actor_method_executor
(PPOTrainer pid=30456)     return method(__ray_actor, *args, **kwargs)
(PPOTrainer pid=30456)   File "D:\Proj\.env\lib\site-packages\ray\util\tracing\tracing_helper.py", line 462, in _resume_span
(PPOTrainer pid=30456)     return method(self, *_args, **_kwargs)
(PPOTrainer pid=30456)   File "D:\Proj\.env\lib\site-packages\ray\rllib\evaluation\rollout_worker.py", line 573, in __init__
(PPOTrainer pid=30456)     self.policy_dict = _determine_spaces_for_multi_agent_dict(
(PPOTrainer pid=30456)   File "D:\Proj\.env\lib\site-packages\ray\rllib\evaluation\rollout_worker.py", line 1937, in _determine_spaces_for_multi_agent_dict
(PPOTrainer pid=30456)     raise ValueError(
(PPOTrainer pid=30456) ValueError: `observation_space` not provided in PolicySpec for default_policy and env does not have an observation space OR no spaces received from other workers' env(s) OR no `observation_space` specified in config!
Traceback (most recent call last):
  File "D:\Proj\kub_IMPALA.py", line 93, in <module>
    result = tune.run("PPO", "run", config=cfg, verbose=1)
  File "D:\Proj\.env\lib\site-packages\ray\tune\tune.py", line 741, in run
    raise TuneError("Trials did not complete", incomplete_trials)
ray.tune.error.TuneError: ('Trials did not complete', [PPO_None_f6442_00000])
(RolloutWorker pid=31592) 2022-06-14 18:45:13,577 ERROR worker.py:451 -- Exception raised in creation task: The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=31592, ip=127.0.0.1, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x000002E1FEE24B50>)
(RolloutWorker pid=31592) File "python\ray\_raylet.pyx", line 665, in ray._raylet.execute_task
(RolloutWorker pid=31592) File "python\ray\_raylet.pyx", line 669, in ray._raylet.execute_task
(RolloutWorker pid=31592) File "python\ray\_raylet.pyx", line 616, in ray._raylet.execute_task.function_executor
(RolloutWorker pid=31592) File "D:\Proj\.env\lib\site-packages\ray\_private\function_manager.py", line 675, in actor_method_executor
(RolloutWorker pid=31592) return method(__ray_actor, *args, **kwargs)
(RolloutWorker pid=31592) File "D:\Proj\.env\lib\site-packages\ray\util\tracing\tracing_helper.py", line 462, in _resume_span
(RolloutWorker pid=31592) return method(self, *_args, **_kwargs)
(RolloutWorker pid=31592) File "D:\Proj\.env\lib\site-packages\ray\rllib\evaluation\rollout_worker.py", line 573, in __init__
(RolloutWorker pid=31592) self.policy_dict = _determine_spaces_for_multi_agent_dict(
(RolloutWorker pid=31592) File "D:\Proj\.env\lib\site-packages\ray\rllib\evaluation\rollout_worker.py", line 1937, in _determine_spaces_for_multi_agent_dict
(RolloutWorker pid=31592) raise ValueError(
(RolloutWorker pid=31592) ValueError: `observation_space` not provided in PolicySpec for default_policy and env does not have an observation space OR no spaces received from other workers' env(s) OR no `observation_space` specified in config!
(pid=) 2022-06-14 18:45:14,113 INFO context.py:67 -- Exec'ing worker with command: "D:\Proj\.env\Scripts\python.exe" D:\Proj\.env\lib\site-packages\ray\workers/default_worker.py --node-ip-address=127.0.0.1 --node-manager-port=57762 --object-store-name=tcp://127.0.0.1:64785 --raylet-name=tcp://127.0.0.1:58844 --redis-address=None --storage=None --temp-dir=C:\Users\m1\AppData\Local\Temp\ray --metrics-agent-port=59156 --logging-rotate-bytes=536870912 --logging-rotate-backup-count=5 --gcs-address=127.0.0.1:59770 --redis-password=5241590000000000 --startup-token=32 --runtime-env-hash=185949076
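The ValueError above says RLlib cannot infer the observation space: it is not set on the env, not in a PolicySpec, and not in the config. A minimal sketch of one way to satisfy it, assuming the spaces shown here are placeholders (the actual shapes and the env name `my_env` are assumptions, not taken from my setup):

```python
# Sketch only: declare the spaces explicitly in the top-level trainer
# config when the env itself cannot report them (e.g. an external or
# client-server env). The Box/Discrete shapes below are made up.
from gym import spaces

cfg = {
    "env": "my_env",            # hypothetical registered env name
    "num_workers": 1,
    "num_envs_per_worker": 1,
    # RLlib falls back to these when no space can be read off the env:
    "observation_space": spaces.Box(low=-1.0, high=1.0, shape=(4,)),
    "action_space": spaces.Discrete(2),
}
```

Alternatively, a custom `gym.Env` can set `self.observation_space` and `self.action_space` in its `__init__`, or (in multi-agent mode) the spaces can be passed per policy via `PolicySpec(observation_space=..., action_space=...)` in the `multiagent` config.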