Having trouble registering a custom environment in Ray 2.0.0

How severe does this issue affect your experience of using Ray?

  • High: It blocks me from completing my task.

It was working fine in Ray 1.11, but now I cannot figure out why my custom environment no longer seems to have access to the ‘config’ variable. The full stack trace is as follows:

2022-09-04 09:36:45,434	INFO worker.py:1515 -- Started a local Ray instance. View the dashboard at http://127.0.0.1:8265 
2022-09-04 09:36:46,509	INFO algorithm.py:1872 -- Your framework setting is 'tf', meaning you are using static-graph mode. Set framework='tf2' to enable eager execution with tf2.x. You may also then want to set eager_tracing=True in order to reach similar execution speed as with static-graph mode.
(RolloutWorker pid=20704) 2022-09-04 09:36:48,826	ERROR worker.py:756 -- Exception raised in creation task: The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=20704, ip=192.168.0.42, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x7f44d077a400>)
(RolloutWorker pid=20704)   File "/usr/local/lib/python3.6/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 490, in __init__
(RolloutWorker pid=20704)     self.env = env_creator(copy.deepcopy(self.env_context))
(RolloutWorker pid=20704)   File "/.../marlenvironment.py", line 40, in env_creator
(RolloutWorker pid=20704)     return SUMOTestMultiAgentEnv(config)
(RolloutWorker pid=20704)   File "/.../marlenvironment.py", line 338, in __init__
(RolloutWorker pid=20704)     self.agent_tags = deepcopy(self._config['agent_ids'])
(RolloutWorker pid=20704) KeyError: 'agent_ids'
(RolloutWorker pid=20704) Exception ignored in: <bound method SUMOTestMultiAgentEnv.__del__ of <marlenvironment.SUMOTestMultiAgentEnv object at 0x7f42911cc4a8>>
(RolloutWorker pid=20704) Traceback (most recent call last):
(RolloutWorker pid=20704)   File "/.../marlenvironment.py", line 397, in __del__
(RolloutWorker pid=20704)     if self.simulation:
(RolloutWorker pid=20704) AttributeError: 'SUMOTestMultiAgentEnv' object has no attribute 'simulation'
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/ray/rllib/algorithms/algorithm.py", line 425, in setup
    logdir=self.logdir,
  File "/usr/local/lib/python3.6/dist-packages/ray/rllib/evaluation/worker_set.py", line 127, in __init__
    validate=trainer_config.get("validate_workers_after_construction"),
  File "/usr/local/lib/python3.6/dist-packages/ray/rllib/evaluation/worker_set.py", line 269, in add_workers
    self.foreach_worker(lambda w: w.assert_healthy())
  File "/usr/local/lib/python3.6/dist-packages/ray/rllib/evaluation/worker_set.py", line 391, in foreach_worker
    remote_results = ray.get([w.apply.remote(func) for w in self.remote_workers()])
  File "/usr/local/lib/python3.6/dist-packages/ray/_private/client_mode_hook.py", line 105, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/ray/_private/worker.py", line 2277, in get
    raise value
ray.exceptions.RayActorError: The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=20704, ip=192.168.0.42, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x7f44d077a400>)
  File "/usr/local/lib/python3.6/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 490, in __init__
    self.env = env_creator(copy.deepcopy(self.env_context))
  File "/.../marlenvironment.py", line 40, in env_creator
    return SUMOTestMultiAgentEnv(config)
  File "/.../marlenvironment.py", line 338, in __init__
    self.agent_tags = deepcopy(self._config['agent_ids'])
KeyError: 'agent_ids'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train.py", line 287, in <module>
    _main()
  File "train.py", line 282, in _main
    my_trainer = my_ppo_config.build()
  File "/usr/local/lib/python3.6/dist-packages/ray/rllib/algorithms/algorithm_config.py", line 310, in build
    logger_creator=self.logger_creator,
  File "/usr/local/lib/python3.6/dist-packages/ray/rllib/algorithms/algorithm.py", line 308, in __init__
    super().__init__(config=config, logger_creator=logger_creator, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/ray/tune/trainable/trainable.py", line 157, in __init__
    self.setup(copy.deepcopy(self.config))
  File "/usr/local/lib/python3.6/dist-packages/ray/rllib/algorithms/algorithm.py", line 443, in setup
    raise e.args[0].args[2]
KeyError: 'agent_ids'

Could the following change be the issue?

In Ray 1.11:

from ray.rllib.agents.ppo import ppo
.
.
.
policies['my_ppo'] = (ppo.PPOTFPolicy,
                      marl_env.get_obs_space('agent_0'),
                      marl_env.get_action_space('agent_0'),
                      {})

to

from ray.rllib.algorithms.ppo import PPO, PPOConfig
.
.
.
policies['my_ppo'] = (PPO,
                      marl_env.get_obs_space('agent_0'),
                      marl_env.get_action_space('agent_0'),
                      {})

in Ray 2.0.0? I also tried replacing the empty fourth argument with the config, to no avail.

This is what I use for the env_creator function:

def env_creator(config):
    """ Environment creator used in the environment registration. """
    return SUMOTestMultiAgentEnv(config)
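
For completeness, the creator is hooked up with the usual register_env pattern; the sketch below is simplified, and the environment name is only illustrative, not my exact project name:

from ray.tune.registry import register_env

# 'sumo_test_env' is an illustrative name; it must match the env string that
# is later passed to the algorithm config.
register_env('sumo_test_env', env_creator)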

Please help me resolve this.

  • You can still access self._config, but it does not contain an agent_ids key; agent_ids is not a config key that RLlib uses. The error you posted shows that you do have access to the _config variable. A quick way to confirm what actually reaches the environment is shown in the sketch after this list.

  • The way you use the env_creator is fine.

  • ppo.PPOTFPolicy in Ray 1.x != PPO in Ray 2.0. PPO is the Algorithm class, not the policy class.
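
To see what actually reaches the environment, you can dump the context inside your creator. This is only a diagnostic sketch of your own env_creator, not anything RLlib requires:

def env_creator(config):
    """ Environment creator used in the environment registration. """
    # `config` is an EnvContext (a dict subclass) built from the algorithm's
    # env_config; keys that were never put into env_config (such as
    # 'agent_ids') will simply not be present here.
    print("env config received by the worker:", dict(config))
    return SUMOTestMultiAgentEnv(config)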

Hello, thank you for your response.

I have also tried commenting out the agent_ids line and accessing other data from inside _config, but it seems to be empty: any key I look up in _config gives me the same KeyError.

If ppo.PPOTFPolicy in Ray 1.x != PPO in Ray 2.0, what should I use in Ray 2.0 as the first element of each ‘policies’ entry, which the source code refers to as policy_cls?

Please advise.

  • Your config has a config.environment() method. That method takes an env_config parameter whose contents are not filtered, so anything you put there should later be accessible inside the environment. My guess is that you cannot access anything in the config because you did not pass your config through env_config, so it never ends up inside the environment; see the sketch after this list.

  • PPOTFPolicy or PPOTorchPolicy (from ray.rllib.algorithms.ppo.ppo_torch_policy import PPOTorchPolicy)
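
Putting both points together, something like the sketch below should work. The environment name, agent ids, and env_config contents are placeholders; policies and marl_env are the objects from your own snippet; and PPOTF1Policy is the TF1 policy class matching your framework='tf' (static-graph) setting:

from ray.rllib.algorithms.ppo import PPOConfig
from ray.rllib.algorithms.ppo.ppo_tf_policy import PPOTF1Policy

# Everything placed in env_config arrives (as an EnvContext) in env_creator
# and therefore ends up in SUMOTestMultiAgentEnv._config.
env_config = {
    'agent_ids': ['agent_0', 'agent_1'],  # placeholder ids
}

my_ppo_config = PPOConfig().environment(env='sumo_test_env', env_config=env_config)

# Use a policy class, not the PPO Algorithm class, as policy_cls.
policies['my_ppo'] = (PPOTF1Policy,
                      marl_env.get_obs_space('agent_0'),
                      marl_env.get_action_space('agent_0'),
                      {})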


Thank you, I did forget to include anything in the env_config parameter of the .environment() call. And yes, I imported PPOTF1Policy just for this! It seems to be working as before now; only time will tell!
