ray.rllib.utils.error.UnsupportedSpaceException

from ray.rllib.algorithms.ppo import PPOConfig

algo = (
    PPOConfig()
    .environment(env=EconomicsWrapper, env_config={"env": env}, disable_env_checking=True)
    .multi_agent(
        policies={
            # One policy per mobile agent ("0".."3") plus the planner ("p").
            "policy_0": (
                None, EconomicsWrapper.get_action_space(0, env), EconomicsWrapper.get_observation_space('0', obs), {"gamma": 0.80}
            ),
            "policy_1": (
                None, EconomicsWrapper.get_action_space(1, env), EconomicsWrapper.get_observation_space('1', obs), {"gamma": 0.80}
            ),
            "policy_2": (
                None, EconomicsWrapper.get_action_space(2, env), EconomicsWrapper.get_observation_space('2', obs), {"gamma": 0.80}
            ),
            "policy_3": (
                None, EconomicsWrapper.get_action_space(3, env), EconomicsWrapper.get_observation_space('3', obs), {"gamma": 0.80}
            ),
            "policy_p": (
                None, EconomicsWrapper.get_action_space('p', env), EconomicsWrapper.get_observation_space('p', obs), {"gamma": 0.80}
            ),
        },
        # Newer RLlib versions pass extra arguments to the mapping fn.
        policy_mapping_fn=lambda agent_id, *args, **kwargs: f"policy_{agent_id}",
    )
    .build()
)

Can anyone help me solve this error?

ray.rllib.utils.error.UnsupportedSpaceException: Action space has multiple dimensions (7, 11, 11). Consider reshaping this into a single dimension, using a custom action distribution, using a Tuple action space, or the multi-agent API.
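For reference, the exception message itself lists the possible fixes. The first one (reshaping into a single dimension) could look roughly like the sketch below. This is a minimal single-agent-style illustration, assuming the offending space returned by get_action_space is a gymnasium Box of shape (7, 11, 11); FlattenActions is a hypothetical wrapper name, not an RLlib class.

import gymnasium as gym
import numpy as np


class FlattenActions(gym.ActionWrapper):
    """Expose a 1-D Box action space; reshape actions back before stepping."""

    def __init__(self, env):
        super().__init__(env)
        self._orig_shape = env.action_space.shape  # e.g. (7, 11, 11)
        # Flatten the bounds so RLlib sees a single-dimension Box.
        self.action_space = gym.spaces.Box(
            low=env.action_space.low.reshape(-1),
            high=env.action_space.high.reshape(-1),
            dtype=env.action_space.dtype,
        )

    def action(self, act):
        # Undo the flattening so the wrapped env receives its original shape.
        return np.asarray(act).reshape(self._orig_shape)

If the three dimensions are actually independent discrete choices, the message's Tuple suggestion (e.g. a Tuple of Discrete(7), Discrete(11), Discrete(11)) may be a cleaner fit than reshaping a Box.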