MARL mapping policy examples not working

How severely does this issue affect your experience of using Ray?

  • High

I am searching for a MARL example where 5 agents share the same policy as well as the same action and observation space; both spaces are made of tuples.

Like this:

  from gymnasium.spaces import Discrete, Tuple

  act_space = Tuple([Discrete(10), Discrete(2)])
  obs_space = Tuple([Discrete(10), Discrete(2)])
  # One entry per agent, all agents sharing the same spaces:
  self.action_space = {agent: act_space for agent in range(self.num_agents)}
  self.observation_space = {agent: obs_space for agent in range(self.num_agents)}

I am getting the following error:

raise ValueError(
ValueError: `observation_space` not provided in PolicySpec for default_policy and env does not have an observation space OR no spaces received from other workers' env(s) OR no `observation_space` specified in config!

Then I tried to run the official example scripts and ran into errors there as well:

When I try to run this example:

I get the following error:

AttributeError: 'AsymCoinGame' object has no attribute 'observation_space'

I have also tried to run this script and got the same error, after fixing two typos: line 61 has an extra {} and line 72 misspells ‘truncated’.

Hi @Username1 ,

I have opened a PR with a fix: [RLlib] Make coingame a tests again by ArturNiederfahrenhorst · Pull Request #33156 · ray-project/ray · GitHub
You can see the fix there, or run on master soon to get the corrected code.
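In the meantime, the original error message itself suggests a workaround: pass the spaces to RLlib explicitly instead of letting it infer them from the env. A sketch of that, assuming RLlib's `PPOConfig`/`PolicySpec` API (the env name `my_multi_agent_env` and the policy id `shared_policy` are illustrative):

```python
from gymnasium.spaces import Discrete, Tuple
from ray.rllib.algorithms.ppo import PPOConfig
from ray.rllib.policy.policy import PolicySpec

act_space = Tuple([Discrete(10), Discrete(2)])
obs_space = Tuple([Discrete(10), Discrete(2)])

config = (
    PPOConfig()
    .environment(env="my_multi_agent_env")  # illustrative registered env name
    .multi_agent(
        # Hand RLlib the spaces directly so it does not have to infer them
        # from the env or from other workers' envs.
        policies={
            "shared_policy": PolicySpec(
                observation_space=obs_space,
                action_space=act_space,
            )
        },
        # All 5 agents map to the same (shared) policy.
        policy_mapping_fn=lambda agent_id, episode, worker, **kwargs: "shared_policy",
    )
)
```

This is a config fragment, not a full training script; `config.build()` would still need the env to be registered.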


Thank you very much!