Atari Preprocessors for Atari-like environments

Hi,

From my experience with RLlib, the default config keys are geared toward MLP models with a fixed number of layers, configurable via config["hiddens"]. However, I have yet to find an example tuned for Atari-like environments (single-agent or multi-agent). An example that reproduces the usual DQN architecture using RLlib's preprocessors would be a great help for RLlib users; a sketch of what I have in mind is below.
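For concreteness, this is the kind of minimal example I am looking for (a sketch only: I am assuming the classic dict-style config, and the conv_filters values are what I believe RLlib's default 84x84 filter spec to be, so please correct anything that is off):

```python
import ray
from ray import tune

ray.init()

config = {
    "env": "BreakoutNoFrameskip-v4",  # single-agent Atari, for illustration
    "framework": "torch",
    # Apply the DeepMind-style preprocessing (84x84, grayscale, frame-stack).
    "preprocessor_pref": "deepmind",
    "model": {
        # Conv stack over the preprocessed 84x84 frames;
        # each entry is [out_channels, kernel, stride].
        "conv_filters": [[16, [8, 8], 4], [32, [4, 4], 2], [256, [11, 11], 1]],
    },
}

tune.run("DQN", config=config, stop={"timesteps_total": 1_000_000})
```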

I am struggling to understand how the built-in preprocessors for Atari can be applied. From the docs, there are two options, preprocessor_pref="deepmind" or preprocessor_pref="rllib", which allow downscaling to dim x dim (where dim is a model config key), grayscale=True to reduce the color channels to one, and zero_mean=True to produce values in [-1.0, 1.0].
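If I read the docs right, the "rllib" variant would be configured roughly like this (dim, grayscale and zero_mean being model config keys; this is my understanding, not something I have verified end to end):

```python
config = {
    "env": "BreakoutNoFrameskip-v4",
    "preprocessor_pref": "rllib",
    "model": {
        "dim": 42,          # downscale observations to 42x42
        "grayscale": True,  # collapse the RGB channels into one
        "zero_mean": True,  # scale pixel values into [-1.0, 1.0]
    },
}
```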

However, even when I use a multi-agent version of Atari (PettingZoo) with the default preprocessor_pref="deepmind", the preprocessor was not applied while running RLlib DQN. Digging into the code, I found that this wrapper does not support MultiAgentEnvs (code here). I would be curious to know why this preprocessor cannot be applied to a MultiAgentEnv. AFAIK, these standard wrappers (resize, frame_stack, frame_skip) should work the same way for single-agent and multi-agent environments, but I could be wrong here; my current workaround is sketched below.
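For now, my workaround is to apply the standard wrappers manually with SuperSuit before handing the env to RLlib, which is roughly what I would have expected the built-in preprocessor to do per agent. A sketch (the pong_v2 / wrapper version suffixes come from my installed PettingZoo/SuperSuit and may differ on yours):

```python
import supersuit as ss
from pettingzoo.atari import pong_v2  # version suffix depends on the PettingZoo release
from ray.rllib.env.wrappers.pettingzoo_env import PettingZooEnv
from ray.tune.registry import register_env


def env_creator(_):
    env = pong_v2.env()
    env = ss.max_observation_v0(env, 2)            # max over 2 frames to remove flicker
    env = ss.frame_skip_v0(env, 4)                 # repeat each action for 4 frames
    env = ss.color_reduction_v0(env, mode="full")  # grayscale
    env = ss.resize_v1(env, 84, 84)                # downscale to 84x84
    env = ss.frame_stack_v1(env, 4)                # stack the last 4 frames
    return PettingZooEnv(env)


register_env("pong_multi_agent", env_creator)
# config = {"env": "pong_multi_agent", ...}  # then train DQN as usual
```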

Can anyone clarify what exactly the is_atari(env) defined here is doing?
Looking at the usage of is_atari() here and its definition here, I think it could be extended to accommodate multi-agent Atari environments from PettingZoo.
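To make my question concrete, here is the kind of heuristic I imagine such a check performing (purely my guess, not the actual RLlib source): an ALE-backed env exposing the raw 210x160x3 RGB observation space. If it is anything like this, a MultiAgentEnv would fail the check simply because it exposes per-agent observation spaces rather than a single Box, which would explain the behaviour I am seeing.

```python
def looks_like_atari(env) -> bool:
    """My guess at what an is_atari()-style check does (hypothetical, not RLlib's code)."""
    obs_space = getattr(env, "observation_space", None)
    if obs_space is None or getattr(obs_space, "shape", None) is None:
        return False
    # Raw ALE frames are 210x160 RGB, and the unwrapped env carries an `ale` handle.
    has_ale = hasattr(getattr(env, "unwrapped", env), "ale")
    return has_ale and tuple(obs_space.shape) == (210, 160, 3)
```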

Looking forward to hearing from the RLlib team and community on this. Thanks!