Multiple hierarchical agents possible?

Hey everyone,

I’ve been playing around with the multi-agent environments and trying to figure out how everything is connected and if my idea of an environment structure is possible.

I would like to implement an environment with many (100+) entities belonging to different hierarchical agent classes (5-10 classes), with individual sub-policies per class (3-4 per agent, around 10-20 different ones in total). So far I have only seen the windy maze example for hierarchical agents, and it implements only a single master agent.

My first question is whether something like this is even possible to implement, both in terms of architecture (multiple hierarchical agents) and the number of agents and policies.
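For what it's worth, the "many entities, few policies" part maps naturally onto RLlib's multi-agent setup via an ordinary policy-mapping function. A minimal sketch, assuming agent IDs of the form `"<class>_<index>"` (the class names and ID scheme here are made up, not from the thread):

```python
# Hypothetical sketch: map 100+ agents of a handful of classes onto shared
# per-class policies with a plain policy-mapping function, the way RLlib's
# multi-agent API expects. Agent-ID naming ("worker_42" etc.) is invented.

AGENT_CLASSES = ["scout", "worker", "soldier", "medic", "commander"]

def policy_mapping_fn(agent_id):
    # Agent IDs are assumed to look like "<class>_<index>", e.g. "worker_42".
    agent_class, _, _index = agent_id.rpartition("_")
    # All entities of one class share that class's (top-level) policy;
    # the 3-4 sub-policies per class would sit one hierarchy level below.
    return f"{agent_class}_policy"

# 125 entities still collapse onto only 5 distinct policies:
agent_ids = [f"{cls}_{i}" for cls in AGENT_CLASSES for i in range(25)]
distinct_policies = {policy_mapping_fn(a) for a in agent_ids}
print(len(agent_ids), len(distinct_policies))  # 125 5
```

So the agent count mostly affects environment throughput, not the number of trainable networks.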

If so, would it be possible to train such a construct partially, i.e. only for some agent classes, while the other agents just evaluate their policies? The hierarchical agents docs suggest this should be possible with "policies_to_train": ["top_level"].
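That is indeed what "policies_to_train" is for: only the listed policy IDs receive gradient updates, the rest just run inference with their current (e.g. restored) weights. A hedged config fragment in the classic RLlib multiagent-dict style (all policy IDs here are placeholders, not from the thread):

```python
# Hypothetical config sketch. Policy IDs ("top_level", "worker_low", ...)
# are placeholders; None in the policy spec means "infer class/spaces".
config = {
    "multiagent": {
        "policies": {
            # policy_id: (policy_cls, obs_space, act_space, extra_config)
            "top_level": (None, None, None, {}),
            "worker_low": (None, None, None, {}),
            "scout_low": (None, None, None, {}),
        },
        "policy_mapping_fn": lambda agent_id: ...,  # your per-class mapping
        # Only the top-level policy is trained; the low-level ones
        # just evaluate their (previously trained/loaded) weights.
        "policies_to_train": ["top_level"],
    },
}
```

You would restore the frozen policies' weights from a checkpoint before training the rest.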

Last but not least, if there is anyone here who has experience with this kind of thing and would be willing to assist (paid) in implementing this as a prototype, I would be glad for any hints.

Cheers and thanks in advance for any help!


Hey @Blubberblub, yeah, I think this is totally possible. You can also take a look at the new self-play examples, which show how you can a) add new policies on the fly to a Trainer and b) change the mapping_fn AND the policies_to_train list on-the-fly.

The example is here (it doesn’t do hierarchical, but I’m guessing you could just implement that by using different policies for the different hierarchy levels):
ray.rllib.examples.self_play_league_based_with_open_spiel.py
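The on-the-fly part of that example essentially boils down to adding a frozen snapshot as a new policy and rebuilding the mapping function, while the trainable-policy list stays fixed. A purely illustrative pure-Python sketch of that bookkeeping (in RLlib itself this is done via `Trainer.add_policy()` inside a callback; all names below are invented):

```python
# Illustrative sketch of the self-play bookkeeping: add a new (frozen)
# policy ID, rebuild the mapping fn over the enlarged opponent pool, and
# leave policies_to_train untouched. Names here are made up.
import random

policies = {"main": object(), "random_opponent": object()}
policies_to_train = ["main"]  # snapshots are never trained

def make_mapping_fn(frozen_ids):
    def mapping_fn(agent_id):
        # The learning agent always plays "main"; every other agent
        # gets a frozen opponent picked at random from the pool.
        if agent_id == "main_agent":
            return "main"
        return random.choice(frozen_ids)
    return mapping_fn

# Later, once "main" beats the current league: freeze a snapshot of it
# and register that snapshot as a new non-trained opponent.
policies["main_v1"] = object()
mapping_fn = make_mapping_fn(["random_opponent", "main_v1"])
```

For the hierarchical case, you could apply the same pattern per hierarchy level, with one mapping function covering both the top-level and the sub-policy IDs.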

Thanks a lot Sven! I'm gonna take a look at the examples and give it a try tomorrow. Is it possible to get some kind of professional support somewhere for ray/tune/rllib apart from the RLlib office hours you're doing?