Ray actor error: env.observation_space.contains(dummy_obs)

I got this error but unfortunately cannot trace it back to my code. What could be the possible reasons for this failure? Thanks!

My observation space initialization is as follows:

import numpy as np
from gym import spaces

def init_obs_space(self):
    observation_space_n = []
    for i in range(self.num_agents):
        # Per-vertex bounds: num_vertices entries in [0, 1] followed by
        # num_vertices entries in [0, num_bins]
        lb = np.zeros(self.num_vertices * 2)
        ub = np.concatenate((
            np.ones(self.num_vertices),
            np.full(self.num_vertices, self.num_bins)
        ))
        if i >= self.num_adversaries:
            # Non-adversary agents observe num_vertices extra values
            # plus a single budget entry
            lb = np.concatenate((
                lb, np.zeros(self.num_vertices + 1)
            ))
            ub = np.concatenate((
                ub, np.full(self.num_vertices, self.num_bins)
            ))
            ub = np.concatenate((
                ub, np.full(1, self.def_budget)
            ))
        observation_space_n.append(
            spaces.Box(lb, ub)  # TODO: update this to be upper bounded by the number of adversaries
        )
    return observation_space_n

Hi @Yinuo_Du,

This is because you are not returning a Gym space: init_obs_space returns a plain Python list, but RLlib expects env.observation_space to be a gym.Space instance, so the contains() check in the error message cannot run against a list.
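A minimal sketch of why the check fails (the 4-element Box here is just a stand-in for your per-agent spaces, not your actual bounds):

import gym
import numpy as np
from gym import spaces

# What init_obs_space currently returns: a plain Python list of Boxes
obs_space_n = [spaces.Box(np.zeros(4), np.ones(4)) for _ in range(2)]

print(isinstance(obs_space_n, gym.Space))  # False: a list is not a gym.Space
print(hasattr(obs_space_n, "contains"))    # False: lists have no .contains(),
                                           # so the check on
                                           # env.observation_space.contains(dummy_obs)
                                           # cannot pass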


Thanks! But returning one space per agent is necessary for me, since this is a multi-agent environment: the goal is to have one observation space for each agent. How else should I do this in RLlib? Any suggestions would be helpful.

You should use the Dict gym space (gym.spaces.Dict) to bundle the per-agent spaces into a single space, as sketched below.
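For example, a minimal sketch based on your init_obs_space. The agent-ID keys ("agent_0", "agent_1", ...) are an assumption for illustration and must match the keys your environment uses in the observation dicts it returns:

import numpy as np
from gym import spaces

def init_obs_space(self):
    # Build one Box per agent as before, then wrap them in a single
    # Dict space keyed by agent ID, so RLlib gets a gym.Space, not a list
    per_agent = {}
    for i in range(self.num_agents):
        lb = np.zeros(self.num_vertices * 2)
        ub = np.concatenate((
            np.ones(self.num_vertices),
            np.full(self.num_vertices, self.num_bins)
        ))
        if i >= self.num_adversaries:
            lb = np.concatenate((lb, np.zeros(self.num_vertices + 1)))
            ub = np.concatenate((
                ub,
                np.full(self.num_vertices, self.num_bins),
                np.full(1, self.def_budget),
            ))
        per_agent[f"agent_{i}"] = spaces.Box(lb, ub, dtype=np.float64)
    return spaces.Dict(per_agent)

Note that a Dict space's contains() expects a dict with the same keys, so your environment's reset() and step() observations need to be dicts keyed the same way.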


This example might be helpful to you.