Need some help with "Actor died because of an error raised in its creation task"

Hey, I am working on an RL approach to an optimization problem called the pickup and delivery problem, using a custom environment. Once I finished it and started testing the environment, I kept getting this error:
2024-02-19 15:35:00,727 INFO worker.py:1724 -- Started a local Ray instance.
2024-02-19 15:35:03,438 INFO tune.py:592 -- [output] This will use the new output engine with verbosity 2. To disable the new output and use the legacy output engine, set the environment variable RAY_AIR_NEW_OUTPUT=0. For more information, please see
+-------------------------------------------------------------+
| Configuration for experiment PPO_2024-02-19_15-35-03        |
+-------------------------------------------------------------+
| Search algorithm                 BasicVariantGenerator      |
| Scheduler                        FIFOScheduler              |
| Number of trials                 1                          |
+-------------------------------------------------------------+

View detailed results here: …ray_results/PPO_2024-02-19_15-35-03
To visualize your results with TensorBoard, run: tensorboard --logdir ...ray_results/PPO_2024-02-19_15-35-03

Trial status: 1 PENDING
Current time: 2024-02-19 15:35:03. Total running time: 0s
Logical resource usage: 0/8 CPUs, 0/0 GPUs
+-------------------------------------------+
| Trial name                  status        |
+-------------------------------------------+
| PPO_env-v0_11997_00000      PENDING       |
+-------------------------------------------+
Trial status: 1 PENDING
Current time: 2024-02-19 15:35:33. Total running time: 30s
Logical resource usage: 2.0/8 CPUs, 0/0 GPUs
+-------------------------------------------+
| Trial name                  status        |
+-------------------------------------------+
| PPO_env-v0_11997_00000      PENDING       |
+-------------------------------------------+
(RolloutWorker pid=10764) Exception raised in creation task: The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=10764, ip=127.0.0.1, actor_id=f78a5dd8f49d0134008c941b01000000, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x000002D79530A790>)
(RolloutWorker pid=10764) File "…anaconda3\envs\myenv\lib\site-packages\gymnasium\envs\registration.py", line 740, in make
(RolloutWorker pid=10764) env_spec = _find_spec(id)
(RolloutWorker pid=10764) File "…anaconda3\envs\myenv\lib\site-packages\gymnasium\envs\registration.py", line 537, in _find_spec
(RolloutWorker pid=10764) _check_version_exists(ns, name, version)
(RolloutWorker pid=10764) File "…anaconda3\envs\myenv\lib\site-packages\gymnasium\envs\registration.py", line 403, in _check_version_exists
(RolloutWorker pid=10764) _check_name_exists(ns, name)
(RolloutWorker pid=10764) File "…anaconda3\envs\myenv\lib\site-packages\gymnasium\envs\registration.py", line 380, in _check_name_exists
(RolloutWorker pid=10764) raise error.NameNotFound(
(RolloutWorker pid=10764) gymnasium.error.NameNotFound: Environment env doesn't exist.
(RolloutWorker pid=10764)
(RolloutWorker pid=10764) During handling of the above exception, another exception occurred:
(RolloutWorker pid=10764)
(RolloutWorker pid=10764) ray::RolloutWorker.__init__() (pid=10764, ip=127.0.0.1, actor_id=f78a5dd8f49d0134008c941b01000000, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x000002D79530A790>)
(RolloutWorker pid=10764) File "python\ray\_raylet.pyx", line 1807, in ray._raylet.execute_task
(RolloutWorker pid=10764) File "python\ray\_raylet.pyx", line 1908, in ray._raylet.execute_task
(RolloutWorker pid=10764) File "python\ray\_raylet.pyx", line 1813, in ray._raylet.execute_task
(RolloutWorker pid=10764) File "python\ray\_raylet.pyx", line 1754, in ray._raylet.execute_task.function_executor
(RolloutWorker pid=10764) File "…anaconda3\envs\myenv\lib\site-packages\ray\_private\function_manager.py", line 726, in actor_method_executor
(RolloutWorker pid=10764) return method(__ray_actor, *args, **kwargs)
(RolloutWorker pid=10764) File "…anaconda3\envs\myenv\lib\site-packages\ray\util\tracing\tracing_helper.py", line 467, in _resume_span
(RolloutWorker pid=10764) return method(self, *_args, **_kwargs)
(RolloutWorker pid=10764) File "…anaconda3\envs\myenv\lib\site-packages\ray\rllib\evaluation\rollout_worker.py", line 407, in __init__
(RolloutWorker pid=10764) self.env = env_creator(copy.deepcopy(self.env_context))
(RolloutWorker pid=10764) File "…anaconda3\envs\myenv\lib\site-packages\ray\rllib\env\utils.py", line 177, in _gym_env_creator
(RolloutWorker pid=10764) raise EnvError(ERR_MSG_INVALID_ENV_DESCRIPTOR.format(env_descriptor))
(RolloutWorker pid=10764) ray.rllib.utils.error.EnvError: The env string you provided ('env-v0') is:
(RolloutWorker pid=10764) a) Not a supported/installed environment.
(RolloutWorker pid=10764) b) Not a tune-registered environment creator.
(RolloutWorker pid=10764) c) Not a valid env class string.
(RolloutWorker pid=10764)
(RolloutWorker pid=10764) Try one of the following:
(RolloutWorker pid=10764) a) For Atari support: pip install gym[atari] autorom[accept-rom-license].
(RolloutWorker pid=10764) For VizDoom support: Install VizDoom
(RolloutWorker pid=10764) and
(RolloutWorker pid=10764) pip install vizdoomgym.
(RolloutWorker pid=10764) For PyBullet support: pip install pybullet.
(RolloutWorker pid=10764) b) To register your custom env, do from ray import tune;
(RolloutWorker pid=10764) tune.register('[name]', lambda cfg: [return env obj from here using cfg]).
(RolloutWorker pid=10764) Then in your config, do config['env'] = [name].
(RolloutWorker pid=10764) c) Make sure you provide a fully qualified classpath, e.g.:
(RolloutWorker pid=10764) ray.rllib.examples.env.repeat_after_me_env.RepeatAfterMeEnv
(PPO pid=12740) 2024-02-19 15:35:40,417 ERROR actor_manager.py:506 -- Ray error, taking actor 1 out of service. The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=10764, ip=127.0.0.1, actor_id=f78a5dd8f49d0134008c941b01000000, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x000002D79530A790>)
(PPO pid=12740) Exception raised in creation task: The actor died because of an error raised in its creation task, ray::PPO.__init__() (pid=12740, ip=127.0.0.1, actor_id=086f92c67b588b30358ab53d01000000, repr=PPO)
(PPO pid=12740) File "…anaconda3\envs\myenv\lib\site-packages\ray\rllib\evaluation\worker_set.py", line 229, in _setup
(PPO pid=12740) self.add_workers(
(PPO pid=12740) File "…anaconda3\envs\myenv\lib\site-packages\ray\rllib\evaluation\worker_set.py", line 616, in add_workers
(PPO pid=12740) raise result.get()
(PPO pid=12740) File "…anaconda3\envs\myenv\lib\site-packages\ray\rllib\utils\actor_manager.py", line 487, in __fetch_result
(PPO pid=12740) result = ray.get(r)
(PPO pid=12740) File "…anaconda3\envs\myenv\lib\site-packages\ray\_private\auto_init_hook.py", line 22, in auto_init_wrapper
(PPO pid=12740) return fn(*args, **kwargs)
(PPO pid=12740) File "…anaconda3\envs\myenv\lib\site-packages\ray\_private\client_mode_hook.py", line 103, in wrapper
(PPO pid=12740) return func(*args, **kwargs)
(PPO pid=12740) File "…anaconda3\envs\myenv\lib\site-packages\ray\_private\worker.py", line 2626, in get
(PPO pid=12740) raise value
(PPO pid=12740) ray.exceptions.RayActorError: The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=10764, ip=127.0.0.1, actor_id=f78a5dd8f49d0134008c941b01000000, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x000002D79530A790>)
(PPO pid=12740) ray::PPO.__init__() (pid=12740, ip=127.0.0.1, actor_id=086f92c67b588b30358ab53d01000000, repr=PPO)
(PPO pid=12740) super().__init__(
(PPO pid=12740) self.setup(copy.deepcopy(self.config))
(PPO pid=12740) File "…anaconda3\envs\myenv\lib\site-packages\ray\rllib\algorithms\algorithm.py", line 638, in setup
(PPO pid=12740) self.workers = WorkerSet(
(PPO pid=12740) raise e.args[0].args[2]
2024-02-19 15:35:40,467 ERROR tune_controller.py:1374 -- Trial task failed for trial PPO_env-v0_11997_00000
Traceback (most recent call last):
File "…anaconda3\envs\myenv\lib\site-packages\ray\air\execution\_internal\event_manager.py", line 110, in resolve_future
result = ray.get(future)
File "…anaconda3\envs\myenv\lib\site-packages\ray\_private\auto_init_hook.py", line 22, in auto_init_wrapper
return fn(*args, **kwargs)
File "…anaconda3\envs\myenv\lib\site-packages\ray\_private\client_mode_hook.py", line 103, in wrapper
return func(*args, **kwargs)
File "…anaconda3\envs\myenv\lib\site-packages\ray\_private\worker.py", line 2626, in get
raise value
ray.exceptions.RayActorError: The actor died because of an error raised in its creation task, ray::PPO.__init__() (pid=12740, ip=127.0.0.1, actor_id=086f92c67b588b30358ab53d01000000, repr=PPO)
File "…anaconda3\envs\myenv\lib\site-packages\ray\rllib\evaluation\worker_set.py", line 229, in _setup
self.add_workers(
File "…anaconda3\envs\myenv\lib\site-packages\ray\rllib\evaluation\worker_set.py", line 616, in add_workers
raise result.get()
File "…anaconda3\envs\myenv\lib\site-packages\ray\rllib\utils\actor_manager.py", line 487, in __fetch_result
result = ray.get(r)
File "…anaconda3\envs\myenv\lib\site-packages\ray\_private\auto_init_hook.py", line 22, in auto_init_wrapper
return fn(*args, **kwargs)
File "…anaconda3\envs\myenv\lib\site-packages\ray\_private\client_mode_hook.py", line 103, in wrapper
return func(*args, **kwargs)
File "…anaconda3\envs\myenv\lib\site-packages\ray\_private\worker.py", line 2626, in get
raise value
ray.exceptions.RayActorError: The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=10764, ip=127.0.0.1, actor_id=f78a5dd8f49d0134008c941b01000000, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x000002D79530A790>)
File "…anaconda3\envs\myenv\lib\site-packages\gymnasium\envs\registration.py", line 740, in make
env_spec = _find_spec(id)
File "…anaconda3\envs\myenv\lib\site-packages\gymnasium\envs\registration.py", line 537, in _find_spec
_check_version_exists(ns, name, version)
File "…anaconda3\envs\myenv\lib\site-packages\gymnasium\envs\registration.py", line 403, in _check_version_exists
_check_name_exists(ns, name)
File "…anaconda3\envs\myenv\lib\site-packages\gymnasium\envs\registration.py", line 380, in _check_name_exists
raise error.NameNotFound(
gymnasium.error.NameNotFound: Environment env doesn't exist.

During handling of the above exception, another exception occurred:

ray::RolloutWorker.__init__() (pid=10764, ip=127.0.0.1, actor_id=f78a5dd8f49d0134008c941b01000000, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x000002D79530A790>)
File "python\ray\_raylet.pyx", line 1807, in ray._raylet.execute_task
File "python\ray\_raylet.pyx", line 1908, in ray._raylet.execute_task
File "python\ray\_raylet.pyx", line 1813, in ray._raylet.execute_task
File "python\ray\_raylet.pyx", line 1754, in ray._raylet.execute_task.function_executor
File "…anaconda3\envs\myenv\lib\site-packages\ray\_private\function_manager.py", line 726, in actor_method_executor
return method(__ray_actor, *args, **kwargs)
File "…anaconda3\envs\myenv\lib\site-packages\ray\util\tracing\tracing_helper.py", line 467, in _resume_span
return method(self, *_args, **_kwargs)
File "…anaconda3\envs\myenv\lib\site-packages\ray\rllib\evaluation\rollout_worker.py", line 407, in __init__
self.env = env_creator(copy.deepcopy(self.env_context))
File "…anaconda3\envs\myenv\lib\site-packages\ray\rllib\env\utils.py", line 177, in _gym_env_creator
raise EnvError(ERR_MSG_INVALID_ENV_DESCRIPTOR.format(env_descriptor))
ray.rllib.utils.error.EnvError: The env string you provided ('env-v0') is:
a) Not a supported/installed environment.
b) Not a tune-registered environment creator.
c) Not a valid env class string.

Try one of the following:
a) For Atari support: pip install gym[atari] autorom[accept-rom-license].
For VizDoom support: Install VizDoom
and
pip install vizdoomgym.
For PyBullet support: pip install pybullet.
b) To register your custom env, do from ray import tune; tune.register('[name]', lambda cfg: [return env obj from here using cfg]).
Then in your config, do config['env'] = [name].
c) Make sure you provide a fully qualified classpath, e.g.:
ray.rllib.examples.env.repeat_after_me_env.RepeatAfterMeEnv

During handling of the above exception, another exception occurred:

ray::PPO.__init__() (pid=12740, ip=127.0.0.1, actor_id=086f92c67b588b30358ab53d01000000, repr=PPO)
File "python\ray\_raylet.pyx", line 1807, in ray._raylet.execute_task
File "python\ray\_raylet.pyx", line 1908, in ray._raylet.execute_task
File "python\ray\_raylet.pyx", line 1813, in ray._raylet.execute_task
File "python\ray\_raylet.pyx", line 1754, in ray._raylet.execute_task.function_executor
File "…anaconda3\envs\myenv\lib\site-packages\ray\_private\function_manager.py", line 726, in actor_method_executor
return method(__ray_actor, *args, **kwargs)
File "…anaconda3\envs\myenv\lib\site-packages\ray\util\tracing\tracing_helper.py", line 467, in _resume_span
return method(self, *_args, **_kwargs)
File "…anaconda3\envs\myenv\lib\site-packages\ray\rllib\algorithms\algorithm.py", line 516, in __init__
super().__init__(
File "…anaconda3\envs\myenv\lib\site-packages\ray\tune\trainable\trainable.py", line 161, in __init__
self.setup(copy.deepcopy(self.config))
File "…anaconda3\envs\myenv\lib\site-packages\ray\util\tracing\tracing_helper.py", line 467, in _resume_span
return method(self, *_args, **_kwargs)
File "…anaconda3\envs\myenv\lib\site-packages\ray\rllib\algorithms\algorithm.py", line 638, in setup
self.workers = WorkerSet(
File "…anaconda3\envs\myenv\lib\site-packages\ray\rllib\evaluation\worker_set.py", line 181, in __init__
raise e.args[0].args[2]
ray.rllib.utils.error.EnvError: The env string you provided ('env-v0') is:
a) Not a supported/installed environment.
b) Not a tune-registered environment creator.
c) Not a valid env class string.

Try one of the following:
a) For Atari support: pip install gym[atari] autorom[accept-rom-license].
For VizDoom support: Install VizDoom
and
pip install vizdoomgym.
For PyBullet support: pip install pybullet.
b) To register your custom env, do from ray import tune; tune.register('[name]', lambda cfg: [return env obj from here using cfg]).
Then in your config, do config['env'] = [name].
c) Make sure you provide a fully qualified classpath, e.g.:
ray.rllib.examples.env.repeat_after_me_env.RepeatAfterMeEnv

Trial PPO_env-v0_11997_00000 errored after 0 iterations at 2024-02-19 15:35:40. Total running time: 36s
Error file: …ray_results/PPO_2024-02-19_15-35-03/PPO_env-v0_11997_00000_0_2024-02-19_15-35-03\error.txt

Trial status: 1 ERROR
Current time: 2024-02-19 15:35:40. Total running time: 36s
Logical resource usage: 0/8 CPUs, 0/0 GPUs
+-------------------------------------------+
| Trial name                  status        |
+-------------------------------------------+
| PPO_env-v0_11997_00000      ERROR         |
+-------------------------------------------+

Number of errored trials: 1
+---------------------------------------------------------------------------------------------------------------------------------------+
| Trial name                # failures   error file                                                                                      |
+---------------------------------------------------------------------------------------------------------------------------------------+
| PPO_env-v0_11997_00000    1            …ray_results/PPO_2024-02-19_15-35-03/PPO_env-v0_11997_00000_0_2024-02-19_15-35-03\error.txt     |
+---------------------------------------------------------------------------------------------------------------------------------------+

Traceback (most recent call last):

File ~\anaconda3\envs\myenv\lib\site-packages\spyder_kernels\py3compat.py:356 in compat_exec
exec(code, globals, locals)

File …\desktop\rl project\ray agent.py:45
results = tune.run(

File ~\anaconda3\envs\myenv\lib\site-packages\ray\tune\tune.py:1036 in run
raise TuneError("Trials did not complete", incomplete_trials)

TuneError: ('Trials did not complete', [PPO_env-v0_11997_00000])

@BenRL welcome to the forum.

It is hard to guess what's wrong without a reproducible example. But the error is at least described in the error message:

a) Not a supported/installed environment.
b) Not a tune-registered environment creator.
c) Not a valid env class string.

Try one of the following:
a) For Atari support: pip install gym[atari] autorom[accept-rom-license].
For VizDoom support: Install VizDoom
and ... 

Some approaches are suggested in the error message as well. Did you try them?

I tried (b), since (a) doesn't apply to me: I am not working with Atari games but with a custom environment for the pickup and delivery problem, where instead of the meta-heuristics usually used for this type of problem I switched to RL.
As for (c), I am not sure. I have defined my observation space and action space in __init__, and my step function returns observation, reward, done, info, where the observation is a dictionary of NumPy arrays. The action space is also a dictionary, of Discrete spaces with max values, like this:
self.action_space = spaces.Dict({
    "assign_request": spaces.Discrete(max_requests),
    "vehicle_movement": spaces.Discrete(max_nodes),
    "pickup": spaces.Discrete(max_pickup_delivery),
    "deliver": spaces.Discrete(max_pickup_delivery),
    "return_depot": spaces.Discrete(max_vehicles),
})
My observation space, which I am not sure about:
self.observation_space = spaces.Dict({
    "vehicle_info": spaces.Box(low=np.float64(-np.inf), high=np.float64(np.inf), shape=(max_vehicles,), dtype=np.float64),
    "request_info": spaces.Box(low=np.float64(-np.inf), high=np.float64(np.inf), shape=(max_requests,), dtype=np.float64),
    "current_time": spaces.Box(low=0, high=np.float64(np.inf), shape=(1,), dtype=np.float64),
    "node_info": spaces.Box(low=np.float64(-np.inf), high=np.float64(np.inf), shape=(len(self.all_nodes_np),), dtype=np.float64),
    "state": spaces.Box(low=np.float64(-np.inf), high=np.float64(np.inf), shape=(len(self.state),), dtype=np.float64),
})
The step function returns this as the observation:
observation = {
    "vehicle_info": self.get_vehicle_info_array(),
    "request_info": self.get_request_info_array(),
    "current_time": np.array([self.current_time], dtype=np.float64),
    "node_info": self.get_node_info_array(),
    "state": self.get_state_array(),
}
done = self.check_if_done()
info = {}
return observation, reward, done, info
And I have a reset function that returns this:

initial_observation = self.state['previous_state']
# Return the initial state of the environment
return initial_observation

I am not sure if it is a problem of nomenclature, or why else it is not registering the environment.

So basically the environment is completely from scratch, custom-built for my problem, so maybe the issue is support. But I have all of the needed functions defined: observation and action spaces, a reset function, and a step function. Could it be detecting an internal problem before training even begins?

RLlib cannot find a way to start your environment, as you have not told it how to. You likely need to register the environment first (with tune.register_env). You might also want to take a look at this thread: Registering Custom Env That Passes an Argument in 1.13.0
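For illustration, a minimal sketch of that registration (the class name PDPEnv, its import path, and the string "pdp-v0" are placeholders, not taken from your code):

from ray import tune
from ray.tune.registry import register_env

from my_project.envs import PDPEnv  # hypothetical import; point this at your env class

# RLlib calls the creator with an EnvContext (a dict-like config object)
# and expects the fully constructed environment instance back.
register_env("pdp-v0", lambda env_config: PDPEnv(env_config))

results = tune.run(
    "PPO",
    config={
        "env": "pdp-v0",    # the registered name, instead of "env-v0"
        "env_config": {},   # forwarded to the creator above
    },
)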

A reproducible example is needed to tell more about where the error might be.

Came across this randomly. I had the same problem when I started, so I thought it would be good to share what I went through.

As @Lars_Simon_Zehnder has already helpfully highlighted, the key error message seems to be this:

(RolloutWorker pid=10764) gymnasium.error.NameNotFound: Environment env doesn't exist.

You can consider the following code:

from ray.tune.registry import register_env

# The second argument is a creator function taking the env config,
# not the class name itself; YourEnvClass is a placeholder here.
register_env("env-v0", lambda env_config: YourEnvClass(env_config))
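The name you register must then be the exact string you pass as env in your config. A brief usage sketch, assuming a tune.run setup like the one in the original post:

from ray import tune

config = {"env": "env-v0"}  # must match the name given to register_env
results = tune.run("PPO", config=config)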

I assume you have resolved this already by now; hope this benefits subsequent readers.