Example of multi-agent environment

How severely does this issue affect your experience of using Ray?

  • Medium: It causes significant difficulty in completing my task, but I can work around it.

I’m troubleshooting my RLlib experiment. I can’t figure out a few things about how my MultiAgentEnv subclass should be defined, specifically how to use the “preferred format” for the observation space.

Is there an up-to-date example of using MultiAgentEnv anywhere? The example I have (from Sven’s talk) already triggers a bunch of deprecation warnings because the way to define these environments has changed.

Thanks for your help,
Ram Rachum.

Hi @cool-RR ,

We have an example file for this here!


Thanks, Arturn. But I’m confused about a couple of things here.

  1. How do I run it? I imported it and nothing happened.
  2. Is the observation space in this example really configured in RLlib’s “preferred format”? I only see self.observation_space = gym.spaces.Discrete(10), rather than a separate observation space for each agent.


Hi @cool-RR,

  1. You don’t run this file directly; it only contains example multi-agent environments. If you want to run a multi-agent environment, there are several runnable examples here: ray/rllib/examples at 04cc762ee308facf4f805381df6f9b17df95a213 · ray-project/ray · GitHub

Perhaps start with this one: ray/multi_agent_cartpole.py at 04cc762ee308facf4f805381df6f9b17df95a213 · ray-project/ray · GitHub

  2. These environments are not in the preferred format. An example that is can be found here:
    ray/multi_agent_different_spaces_for_agents.py at 04cc762ee308facf4f805381df6f9b17df95a213 · ray-project/ray · GitHub
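To make the multi-agent convention concrete, here is a minimal sketch of the reset/step protocol that RLlib’s MultiAgentEnv expects: observations, rewards, and dones are all dicts keyed by agent ID, with a special "__all__" key signalling whether the whole episode is over. This uses plain-Python stand-ins so it runs without Ray or Gym installed; the class name `TwoAgentEnv` and the agent IDs are made up for illustration, and a real env would subclass ray.rllib.env.MultiAgentEnv and use gymnasium spaces (one space per agent in the preferred format).

```python
class TwoAgentEnv:
    """Illustrative stand-in for an RLlib-style multi-agent env.

    Not a real MultiAgentEnv subclass -- it only mimics the shape of the
    reset/step return values, which are dicts keyed by agent ID.
    """

    def __init__(self):
        self.agents = ["agent_0", "agent_1"]
        self.t = 0

    def reset(self):
        self.t = 0
        # One observation per agent, keyed by agent ID.
        return {aid: 0 for aid in self.agents}

    def step(self, action_dict):
        self.t += 1
        obs = {aid: self.t for aid in action_dict}
        rewards = {aid: 1.0 for aid in action_dict}
        done = self.t >= 5
        dones = {aid: done for aid in action_dict}
        # "__all__" tells the trainer whether the whole episode is over.
        dones["__all__"] = done
        return obs, rewards, dones, {}


env = TwoAgentEnv()
obs = env.reset()
obs, rewards, dones, infos = env.step({aid: 0 for aid in obs})
print(obs, rewards, dones["__all__"])
```

In the preferred format, you would additionally declare a separate observation space and action space per agent (e.g. a dict of gymnasium spaces keyed by the same agent IDs), rather than a single shared space as in the Discrete(10) example above.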

I understand, thank you.