Loading pre-trained single-agent policy weights for multi-agent training

Hi! I’m currently trying to use RLlib to train on a custom multi-agent car racing environment (essentially a multi-agent version of Gym CarRacing-v0). In my previous workflow without RLlib, I was pre-training a model on the single-agent CarRacing-v0 environment before fine-tuning in the multi-agent environment. Is this something that’s possible with RLlib? For instance, if I have four policies in my multi-agent environment (one for each of four agents), would I be able to save model weights from the single-agent environment and load these weights into the four multi-agent policies? I’d like each policy in the multi-agent environment to start out with the same pre-trained weights.

Thanks so much!

Hey @jgonik, great question. We should add an example script to RLlib that shows how to do this.

You can do a pre-run using the BCTrainer (ray.rllib.agents.marwil.bc). The test case in ray.rllib.agents.marwil.tests.test_bc shows how to train from an offline file.
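For reference, a minimal sketch of such an offline BC pre-run might look like the following (the rollout-file path and the number of training iterations are placeholders; the file is assumed to contain single-agent CarRacing-v0 episodes recorded via the "output" option of another trainer):

from ray.rllib.agents.marwil import BCTrainer

config = {
    "env": "CarRacing-v0",
    "num_workers": 0,
    # Placeholder path to previously recorded single-agent rollouts.
    "input": ["/tmp/car-racing-rollouts.json"],
    "input_evaluation": [],  # skip off-policy estimation for plain BC
}
trainer = BCTrainer(config=config)
for _ in range(10):  # <- train for however many iterations you need
    print(trainer.train())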
After training your BCTrainer, save the policy's weights like this:

from ray.rllib.agents.marwil import BCTrainer
from ray.rllib.agents.ppo import PPOTrainer

trainer = BCTrainer(...)
...  # <- training
weights = trainer.get_policy().get_weights()  # <- single-agent weights

# Create the actual trainer and load the BC-trained weights into it.
# "policy_0" ... "policy_3" stand in for whatever policy IDs you defined
# in your multiagent config.
new_trainer = PPOTrainer(...)
for n in range(4):
    policy = new_trainer.get_policy(f"policy_{n}")
    policy.set_weights(weights)
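
For completeness, here is a hedged sketch of where those four policy IDs could come from in the PPOTrainer's multiagent config, and of broadcasting the loaded weights to the remote rollout workers (MultiAgentCarRacing, the "policy_0"..."policy_3" names, and the assumption that your env uses agent IDs 0-3 with identical observation/action spaces are all placeholders for your own setup):

import gym
from ray.rllib.agents.ppo import PPOTrainer

# Take the spaces from the single-agent env; each agent in the
# multi-agent version is assumed to see the same spaces.
single_env = gym.make("CarRacing-v0")
obs_space = single_env.observation_space
act_space = single_env.action_space

config = {
    "env": MultiAgentCarRacing,  # <- placeholder for your custom env class
    "multiagent": {
        # None means "use the trainer's default policy class".
        "policies": {
            f"policy_{n}": (None, obs_space, act_space, {}) for n in range(4)
        },
        # Assumes your env hands out agent IDs 0..3.
        "policy_mapping_fn": lambda agent_id: f"policy_{agent_id}",
    },
}
new_trainer = PPOTrainer(config=config)

# Setting weights on the local policies (as above) does not touch the
# remote rollout workers, so push the same weights to all workers, too:
weights_dict = {f"policy_{n}": weights for n in range(4)}
new_trainer.workers.foreach_worker(lambda w: w.set_weights(weights_dict))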

Awesome, thanks so much! I’ll give that a try 🙂