Hi everyone, I am still plagued by this question/bug, and I would appreciate it if anyone has any insight on this one. Thanks a lot!
In short, my make_env
function also needs to load a torch model for inference, which requires access to a CUDA device; the machine actually has multiple GPUs. It works fine with PPO or QMIX when run in local mode.
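To make the question concrete, here is a minimal sketch of the kind of setup I mean (all names here are illustrative, not my actual code): an env creator loads a torch model for inference and has to pick one of several GPUs. In local mode this runs in the driver process and sees every device; the problem is getting the same device access inside the rollout workers.

```python
import torch
import torch.nn as nn

def load_inference_model(device_id: int = 0) -> nn.Module:
    # Pick a specific GPU on a multi-GPU machine; fall back to CPU so
    # the sketch also runs where CUDA is unavailable.
    device = (torch.device(f"cuda:{device_id}")
              if torch.cuda.is_available() else torch.device("cpu"))
    model = nn.Linear(4, 2)  # placeholder for the real inference model
    model.to(device).eval()
    return model

def env_creator(env_config: dict):
    # Would be registered via ray.tune.registry.register_env and called
    # once per rollout worker; in local mode it runs in the driver.
    model = load_inference_model(env_config.get("device_id", 0))
    # ... build and return the env that uses `model` for inference
```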