How to wrap observations for `MultiAgentEnv` or `PettingZooEnv`?

Hello, I have a question. For a simple `gym.Env` (i.e., a single-agent env) it's known that inheriting from `gym.core.ObservationWrapper` is enough.
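To make the single-agent case concrete, here is roughly the pattern I mean. I've used a toy stand-in env so the snippet is self-contained; in real code `ToyEnv` would be a `gym.Env` and the wrapper would subclass `gym.core.ObservationWrapper` (all class names here are my own sketch, not gym API):

```python
class ToyEnv:
    """Toy stand-in for a gym.Env: reset/step return a raw observation."""
    def reset(self):
        return [1.0, 2.0]

    def step(self, action):
        # obs, reward, done, info
        return [3.0, 4.0], 0.0, False, {}


class ScaleObsWrapper:
    """Mimics gym.core.ObservationWrapper: only observation() is overridden,
    and reset/step pass every raw observation through it."""
    def __init__(self, env, scale=0.5):
        self.env = env
        self.scale = scale

    def observation(self, obs):
        # The one hook a wrapper author implements.
        return [o * self.scale for o in obs]

    def reset(self):
        return self.observation(self.env.reset())

    def step(self, action):
        obs, rew, done, info = self.env.step(action)
        return self.observation(obs), rew, done, info


env = ScaleObsWrapper(ToyEnv())
print(env.reset())  # → [0.5, 1.0]
```

My question is how this single `observation()` hook translates to the multi-agent setting.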

  • How would this work for a `MultiAgentEnv`? Would we have to downgrade our `MultiAgentEnv` class to a plain gym base class?
  • I can't find any info about collating samples: at what point should I think about converting observations from numpy arrays to `torch.Tensor`?
  • Do we always need to specify `build_agent_spaces`, like at ray/ at master · ray-project/ray · GitHub?
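For context, here is the kind of multi-agent observation wrapper I'm imagining: since a `MultiAgentEnv` returns observations as a dict keyed by agent id, the wrapper would apply the transform per agent. Again a self-contained sketch with a toy env, not RLlib API — in real code `ToyMultiAgentEnv` would subclass `ray.rllib.env.MultiAgentEnv`, and `MultiAgentObsWrapper` is a name I made up:

```python
class ToyMultiAgentEnv:
    """Toy stand-in for a MultiAgentEnv: obs/rewards/dones are dicts
    keyed by agent id, with the special "__all__" done flag."""
    def reset(self):
        return {"agent_0": [1.0], "agent_1": [2.0]}

    def step(self, action_dict):
        obs = {"agent_0": [3.0], "agent_1": [4.0]}
        rewards = {"agent_0": 0.0, "agent_1": 0.0}
        dones = {"__all__": False}
        return obs, rewards, dones, {}


class MultiAgentObsWrapper:
    """Hypothetical analogue of ObservationWrapper for the multi-agent
    case: observation() is applied to each agent's entry in the dict."""
    def __init__(self, env):
        self.env = env

    def observation(self, obs):
        # Per-agent transform; e.g. scaling (numpy -> torch could go here).
        return [o * 2 for o in obs]

    def _wrap(self, obs_dict):
        return {aid: self.observation(o) for aid, o in obs_dict.items()}

    def reset(self):
        return self._wrap(self.env.reset())

    def step(self, action_dict):
        obs, rewards, dones, infos = self.env.step(action_dict)
        return self._wrap(obs), rewards, dones, infos
```

Is this per-agent-dict approach the intended way, or does RLlib expect something else?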