Hello, I have a question. For a simple single-agent env it's known that inheriting from gym.core.ObservationWrapper is enough.
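For context, here is a minimal sketch of the single-agent case I mean (the `DummyEnv` and `ScaleObs` names are just placeholders for illustration, not anything from Ray):

```python
import gym
import numpy as np


class DummyEnv(gym.Env):
    """Tiny stand-in env, only here so the wrapper has something to wrap."""
    observation_space = gym.spaces.Box(low=0, high=255, shape=(4,), dtype=np.uint8)
    action_space = gym.spaces.Discrete(2)


class ScaleObs(gym.core.ObservationWrapper):
    """Rescale raw uint8 observations to float32 in [0, 1]."""

    def observation(self, obs):
        return np.asarray(obs, dtype=np.float32) / 255.0


wrapped = ScaleObs(DummyEnv())
out = wrapped.observation(np.full(4, 255, dtype=np.uint8))
```

This pattern is straightforward for one agent; my questions below are about the multi-agent case.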
- How would this work if we downgrade our MultiAgentEnv class to something based on …?
- I can’t find any info about collating samples: at what point do I need to think about converting observations from numpy arrays to torch.Tensor?
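To make the question concrete, this is roughly what I have in mind (my own guess, not something from RLlib: convert once per batch at collate time rather than per step):

```python
import numpy as np
import torch


def collate(obs_list):
    """Stack a list of per-step numpy observations into one float32 tensor.

    Converting here, on the whole batch, avoids a numpy->torch copy
    on every single environment step.
    """
    return torch.from_numpy(np.stack(obs_list)).float()


batch = collate([np.zeros(4), np.ones(4)])  # shape (2, 4)
```

Is that the right place, or does RLlib expect the conversion somewhere else in the pipeline?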
- Do we always need to specify build_agent_spaces, like in ray/kaggle_wrapper.py in the ray-project/ray repo on GitHub?