Hi! I’m currently trying to use RLlib to train on a custom multi-agent car racing environment (essentially a multi-agent version of Gym CarRacing-v0). In my previous workflow without RLlib, I was pre-training a model on the single-agent CarRacing-v0 environment before fine-tuning in the multi-agent environment. Is this something that’s possible with RLlib? For instance, if I have four policies in my multi-agent environment (one for each of four agents), would I be able to save model weights from the single-agent environment and load these weights into the four multi-agent policies? I’d like each policy in the multi-agent environment to start out with the same pre-trained weights.
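To make the question concrete, here's a rough sketch of what I'm imagining. The env name, policy ids, and config are placeholders for my setup, and I'm only guessing at the `get_weights`/`set_weights` flow:

```python
import copy


def broadcast_weights(pretrained_weights, policy_ids):
    # Map every multi-agent policy id to its own copy of the
    # pre-trained single-agent weights, in the {policy_id: weights}
    # shape that (I believe) Trainer.set_weights() expects.
    return {pid: copy.deepcopy(pretrained_weights) for pid in policy_ids}


# Intended RLlib usage (not runnable as-is; "MultiCarRacing" and the
# policy ids below are placeholders for my own env/config):
#
#   from ray.rllib.agents.ppo import PPOTrainer
#
#   single = PPOTrainer(env="CarRacing-v0")
#   ...  # pre-train in the single-agent env, then grab the weights:
#   weights = single.get_policy().get_weights()
#
#   multi = PPOTrainer(env="MultiCarRacing", config={...})  # 4 policies
#   multi.set_weights(
#       broadcast_weights(weights, ["p0", "p1", "p2", "p3"]))

if __name__ == "__main__":
    # Tiny sanity check with fake weights in place of a real model.
    fake = {"fc_1/kernel": [0.1, 0.2]}
    seeded = broadcast_weights(fake, ["p0", "p1", "p2", "p3"])
    print(sorted(seeded))
```

Is this roughly the right approach, or is there a more idiomatic way to do it (e.g. via checkpoints and `restore`)?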
Thanks so much!