Bug report: multi_agent configuration

1. Severity of the issue: (select one)
None: I’m just curious or want clarification.
Low: Annoying but doesn’t hinder my work.
Medium: Significantly affects my productivity but can find a workaround.
High: Completely blocks me.

2. Environment:

  • Ray version: 2.44
  • Python version: 3.12
  • OS: Windows
  • Cloud/Infrastructure:
  • Other libs/tools (if relevant):

3. What happened vs. what you expected:

  • Expected:
  • Actual:

import gymnasium as gym
from ray.rllib.algorithms.ppo import PPOConfig

env = gym.make("CartPole-v1")
cfg = (
    PPOConfig()
    .environment(env="CartPole-v1")
    .multi_agent(
        policies={
            "asdf": (
                None,
                env.observation_space,
                env.action_space,
                {"model": [128, 128]},
            )
        },
        policy_mapping_fn=lambda agent_id, episode, **kwargs: "asdf",
    )
    .debugging(log_level="INFO")
)
cfg.build_algo()

output:
2025-09-18 10:00:23,712 ERROR actor_manager.py:873 -- Ray error (The actor died because of an error raised in its creation task, ray::MultiAgentEnvRunner.__init__() (pid=29440, ip=127.0.0.1, actor_id=3a3bb3179ad29cccc9eae6cc01000000, repr=<ray.rllib.env.multi_agent_env_runner.MultiAgentEnvRunner object at 0x000002250DE38250>)
  File "python\ray\_raylet.pyx", line 1895, in ray._raylet.execute_task
  File "python\ray\_raylet.pyx", line 1835, in ray._raylet.execute_task.function_executor
  File "c:\DEC\am\.venv\lib\site-packages\ray\_private\function_manager.py", line 689, in actor_method_executor
    return method(__ray_actor, *args, **kwargs)
  File "c:\DEC\am\.venv\lib\site-packages\ray\util\tracing\tracing_helper.py", line 463, in _resume_span
    return method(self, *_args, **_kwargs)
  File "c:\DEC\am\.venv\lib\site-packages\ray\rllib\env\multi_agent_env_runner.py", line 107, in __init__
    self.make_env()
  File "c:\DEC\am\.venv\lib\site-packages\ray\util\tracing\tracing_helper.py", line 463, in _resume_span
    return method(self, *_args, **_kwargs)
  File "c:\DEC\am\.venv\lib\site-packages\ray\rllib\env\multi_agent_env_runner.py", line 795, in make_env
    self.env = make_vec(
  File "c:\DEC\am\.venv\lib\site-packages\ray\rllib\env\vector\registration.py", line 69, in make_vec
    env = SyncVectorMultiAgentEnv(
  File "c:\DEC\am\.venv\lib\site-packages\ray\rllib\env\vector\sync_vector_multi_agent_env.py", line 37, in __init__
    self.single_action_spaces = self.envs[0].unwrapped.action_spaces or dict(
AttributeError: 'CartPoleEnv' object has no attribute 'action_spaces'), taking actor 1 out of service.

2025-09-18 10:00:23,714 ERROR actor_manager.py:873 -- Ray error (The actor died because of an error raised in its creation task, ray::MultiAgentEnvRunner.__init__() (pid=46112, ip=127.0.0.1, actor_id=172503951d93979676e46d9901000000, repr=<ray.rllib.env.multi_agent_env_runner.MultiAgentEnvRunner object at 0x000001E90DC78280>) -- same traceback as above, ending in:

  File "c:\DEC\am\.venv\lib\site-packages\ray\rllib\env\vector\sync_vector_multi_agent_env.py", line 37, in __init__
    self.single_action_spaces = self.envs[0].unwrapped.action_spaces or dict(
AttributeError: 'CartPoleEnv' object has no attribute 'action_spaces'

I looked at the source code of SyncVectorMultiAgentEnv: line 37 probably needs to be fixed, or at least a clear warning should be raised when the environment passed in (CartPole here) is a single-agent environment.