As far as I know, there is still no built-in option for that, but you can work around it.
If you use the same environment, you can simply load your checkpoint and compute actions only for the selected agent.
If not, you will need to load the checkpoint in the environment you trained it in, then save the selected agent's weights, which you can load later on their own.
You can see an example of the code here.
Keep in mind that you are saving only the weights, not the entire training state as in a full checkpoint.
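To illustrate the idea, here is a minimal, framework-agnostic sketch. The checkpoint structure, agent ids, and file names are all hypothetical (your RL library will have its own checkpoint format and its own get/set-weights API); the point is just that you pull out one agent's weights and persist them separately, losing the rest of the training state.

```python
import pickle
import tempfile
import os

# Hypothetical checkpoint layout: a dict mapping agent ids to their
# state. A real checkpoint also carries optimizer state, the training
# iteration, replay buffers, etc. -- none of that is saved below.
checkpoint = {
    "agent_0": {"weights": [0.1, 0.2]},
    "agent_1": {"weights": [0.3, 0.4]},
    "optimizer_state": {"step": 1000},
}

selected = "agent_1"
path = os.path.join(tempfile.gettempdir(), "agent_1_weights.pkl")

# Save only the selected agent's weights.
with open(path, "wb") as f:
    pickle.dump(checkpoint[selected]["weights"], f)

# Later (possibly in a different script/environment), load just those
# weights. Note that the optimizer state and other training state from
# the original checkpoint are gone.
with open(path, "rb") as f:
    weights = pickle.load(f)

print(weights)  # [0.3, 0.4]
```

This is why resuming training from such a file is not equivalent to restoring a checkpoint: only inference (or a fresh fine-tune with reinitialized optimizer state) is straightforward.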