Hi, I am new to RLlib and have recently been working on a custom multi-agent env. I am not sure how MultiAgentEnv handles observation space mapping. Most examples I found assume all agents share an identical observation space, so a single `obs_space` (e.g. one `gym.spaces.Box`) is enough.
If I set the observation space to a Dict like the one below, it leads to a ValueError, because each observation comes from a single agent while the declared space is a Dict over all agents.
My problem is homogeneous, so this is not actually blocking for me; I am just wondering whether per-agent observation space mapping is available through MultiAgentEnv with the independent-learner setting.
self.observation_space = spaces.Dict({
    'agent_0': spaces.Tuple([
        spaces.Box(-np.inf, np.inf, shape=(9,), dtype=np.int64),
    ]),
    'agent_1': spaces.Tuple([
        spaces.Box(-np.inf, np.inf, shape=(9,), dtype=np.int64),
    ]),
    'agent_2': spaces.Tuple([
        spaces.Box(-np.inf, np.inf, shape=(9,), dtype=np.int64),
    ]),
})
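For context, here is a minimal sketch of the pattern I understand MultiAgentEnv to expect: `reset()`/`step()` return dicts keyed by agent id, and each agent's individual observation is checked against that agent's own space rather than against one Dict space over all agents. The `ToyMultiAgentEnv` class and `per_agent_obs_space` dict are hypothetical names of my own, and I switched the dtype to `float32` since an int64 Box with infinite bounds is itself questionable:

```python
import numpy as np

# gymnasium is the current package name; older setups use gym.
try:
    from gymnasium import spaces
except ImportError:
    from gym import spaces

# Hypothetical per-agent space mapping kept in a plain dict
# (not passed to the env as a single spaces.Dict).
per_agent_obs_space = {
    f"agent_{i}": spaces.Box(-np.inf, np.inf, shape=(9,), dtype=np.float32)
    for i in range(3)
}


class ToyMultiAgentEnv:
    """Stand-in for a MultiAgentEnv: reset() returns a dict mapping
    agent id -> that agent's individual observation."""

    def reset(self):
        return {
            agent_id: space.sample()
            for agent_id, space in per_agent_obs_space.items()
        }


env = ToyMultiAgentEnv()
obs = env.reset()

# Each observation is validated against its own agent's space,
# not against a Dict space covering all agents at once.
ok = all(per_agent_obs_space[a].contains(o) for a, o in obs.items())
```

If this is the intended contract, heterogeneous spaces would presumably be wired up per policy rather than through a single env-level Dict space, which is what I am hoping to confirm.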