Agents sharing the environment for efficiency

I have a world where multiple agents can live. The agents are identical, so I want to train a single policy to guide all of them. Instead of creating 10 worlds with a single agent each, I can create just one world with 10 agents and enjoy a 10x computation saving. Since all agents step in parallel and run the same policy, I can collect 10x more samples this way.

Now I’m trying to plug this into RLlib. How can I define an environment that steps multiple agents within the same world? I’ve been looking at VectorEnv, but that wraps N distinct environments into a single vectorized one, which isn’t what I want.

Hi @akhodakivskiy,

what about using MultiAgentEnv?

In RLlib, you can use the MultiAgentEnv that Lars suggested. Here’s a framework that can help get you started.
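In the meantime, here is a minimal sketch of what such an env could look like. To be clear, none of this code is from the linked framework: the class name `SharedWorldEnv`, the `num_agents` config key, and the dummy observation/reward logic are all made up for illustration, and the exact `reset`/`step` signatures vary across RLlib versions (this sketch assumes the Gymnasium-style API of recent releases).

```python
import gymnasium as gym
from ray.rllib.env.multi_agent_env import MultiAgentEnv


class SharedWorldEnv(MultiAgentEnv):
    """One world, N identical agents stepping in lockstep (illustrative only)."""

    def __init__(self, config=None):
        super().__init__()
        config = config or {}
        self.num_agents = config.get("num_agents", 10)
        self._agent_ids = {f"agent_{i}" for i in range(self.num_agents)}
        # All agents share the same obs/action spaces (they are identical).
        self.observation_space = gym.spaces.Box(-1.0, 1.0, shape=(4,))
        self.action_space = gym.spaces.Discrete(2)

    def reset(self, *, seed=None, options=None):
        # One observation per agent; all of them see the same shared world.
        obs = {aid: self.observation_space.sample() for aid in self._agent_ids}
        return obs, {}

    def step(self, action_dict):
        # All agents act in the same tick, so a single env.step() yields
        # len(action_dict) transitions for the one shared policy.
        obs = {aid: self.observation_space.sample() for aid in action_dict}
        rewards = {aid: 0.0 for aid in action_dict}  # plug in real reward logic
        terminateds = {"__all__": False}
        truncateds = {"__all__": False}
        return obs, rewards, terminateds, truncateds, {}
```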

A word of caution: multiple agents interacting in the same environment is NOT the same thing as multiple environments each with a single agent, even if the policy is shared. In the single-agent case, only one agent's actions change the environment, whereas in the multi-agent case every agent's actions change the environment, so each agent experiences a world that is influenced by the other agents' actions. This can have a huge impact on the resulting trained policy.
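As for the "train a single policy" part of the question, you can map every agent ID to one shared policy in the algorithm config. Again a hedged sketch, not from this thread: the policy ID `"shared_policy"` is arbitrary, and the exact config API shown (`PPOConfig`, introduced in Ray 2.x) may differ in your RLlib version.

```python
from ray.rllib.algorithms.ppo import PPOConfig

config = (
    PPOConfig()
    .environment(SharedWorldEnv, env_config={"num_agents": 10})
    .multi_agent(
        # One policy ID; every agent maps to it, so all 10 agents
        # collect samples for (and train) the same set of weights.
        policies={"shared_policy"},
        policy_mapping_fn=lambda agent_id, *args, **kwargs: "shared_policy",
    )
)
algo = config.build()
result = algo.train()  # each env step now yields ~10x samples for the policy
```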

@Lars_Simon_Zehnder: Great tip, thanks! I ended up implementing a MultiAgentEnv. Works great!

@rusu24edward: My agents only share resources in the “world” and are otherwise independent. Their behavior in the shared world is identical to their behavior in 10 worlds with a single agent each. I’m looking at your link now. Thanks!
