How to disable auto-encoding?

My observation space is a spaces.Discrete.

RLlib automatically one-hot encodes this observation, as mentioned in the docs.

How can I disable this behavior?

The tensor my model receives has shape [batch, discrete_size], but I want shape [batch].
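For illustration, here is a minimal sketch of what the built-in one-hot preprocessing does to a Discrete observation (a standalone reimplementation for clarity, not RLlib's actual code):

```python
import numpy as np

def one_hot(obs: int, n: int) -> np.ndarray:
    """Mimic the one-hot encoding applied to a Discrete(n) observation."""
    vec = np.zeros(n, dtype=np.float32)
    vec[obs] = 1.0
    return vec

# A Discrete(4) observation of 2 becomes a length-4 vector,
# so a batch of them has shape [batch, 4] instead of [batch]:
print(one_hot(2, 4))  # [0. 0. 1. 0.]
```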

cc @sven1977 might be related to [rllib] provides a flattened observation, instead of the original, for custom_loss() · Issue #7896 · ray-project/ray · GitHub

Hey @Astariul, could you try this?

from ray.rllib.models.preprocessors import NoPreprocessor
from ray.rllib.models.catalog import ModelCatalog

ModelCatalog.register_custom_preprocessor("noprep", NoPreprocessor)

# in your config:
config = {
    "model": {
        "custom_preprocessor": "noprep",
    },
}

I’m getting the following warning and error:

2021-05-26 20:11:41,390 WARNING catalog.py:620 -- DeprecationWarning: Custom preprocessors are deprecated, since they sometimes conflict with the built-in preprocessors for handling complex observation spaces. Please use wrapper classes around your environment instead of preprocessors.

Traceback (most recent call last):
  File "train_ip.py", line 362, in <module>
    trainer = ppo.PPOTrainer(config=config)
  File "/home/remondn/miniconda3/envs/rllib/lib/python3.6/site-packages/ray/rllib/agents/trainer_template.py", line 121, in __init__
    Trainer.__init__(self, config, env, logger_creator)
  File "/home/remondn/miniconda3/envs/rllib/lib/python3.6/site-packages/ray/rllib/agents/trainer.py", line 516, in __init__
    super().__init__(config, logger_creator)
  File "/home/remondn/miniconda3/envs/rllib/lib/python3.6/site-packages/ray/tune/trainable.py", line 98, in __init__
    self.setup(copy.deepcopy(self.config))
  File "/home/remondn/miniconda3/envs/rllib/lib/python3.6/site-packages/ray/rllib/agents/trainer.py", line 707, in setup
    self._init(self.config, self.env_creator)
  File "/home/remondn/miniconda3/envs/rllib/lib/python3.6/site-packages/ray/rllib/agents/trainer_template.py", line 153, in _init
    num_workers=self.config["num_workers"])
  File "/home/remondn/miniconda3/envs/rllib/lib/python3.6/site-packages/ray/rllib/agents/trainer.py", line 789, in _make_workers
    logdir=self.logdir)
  File "/home/remondn/miniconda3/envs/rllib/lib/python3.6/site-packages/ray/rllib/evaluation/worker_set.py", line 98, in __init__
    spaces=spaces,
  File "/home/remondn/miniconda3/envs/rllib/lib/python3.6/site-packages/ray/rllib/evaluation/worker_set.py", line 357, in _make_worker
    spaces=spaces,
  File "/home/remondn/miniconda3/envs/rllib/lib/python3.6/site-packages/ray/rllib/evaluation/rollout_worker.py", line 517, in __init__
    policy_dict, policy_config)
  File "/home/remondn/miniconda3/envs/rllib/lib/python3.6/site-packages/ray/rllib/evaluation/rollout_worker.py", line 1128, in _build_policy_map
    obs_space, merged_conf.get("model"))
  File "/home/remondn/miniconda3/envs/rllib/lib/python3.6/site-packages/ray/rllib/models/catalog.py", line 626, in get_preprocessor_for_space
    observation_space, options)
  File "/home/remondn/miniconda3/envs/rllib/lib/python3.6/site-packages/ray/rllib/models/preprocessors.py", line 40, in __init__
    self._size = int(np.product(self.shape))
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
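The deprecation warning suggests wrapping the environment instead. A minimal sketch of that approach, assuming a gym-style env API (in a real project you would subclass gym.ObservationWrapper and also replace observation_space with a Box(0, n - 1, shape=(1,)) space; the stub env here is only for illustration):

```python
import numpy as np

class DiscreteToBoxWrapper:
    """Wrap a gym-style env so its Discrete(n) integer observation is
    exposed as a 1-element float array, which the built-in preprocessor
    passes through instead of one-hot encoding."""

    def __init__(self, env):
        self.env = env

    def reset(self):
        return self._convert(self.env.reset())

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return self._convert(obs), reward, done, info

    def _convert(self, obs):
        # Keep the class index as-is, just change the container.
        return np.array([obs], dtype=np.float32)


# Tiny stub env with a Discrete-style integer observation:
class StubEnv:
    def reset(self):
        return 3

    def step(self, action):
        return 1, 0.0, False, {}


env = DiscreteToBoxWrapper(StubEnv())
print(env.reset())  # [3.]
```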


My workaround for now is to simply convert the one-hot encoded tensor back to a class index:

x = torch.argmax(x, dim=1)
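In context, that workaround would sit at the top of the model's forward pass. A sketch of the undo step (tensor values are made up):

```python
import torch

def undo_one_hot(x: torch.Tensor) -> torch.Tensor:
    """Recover class indices from a one-hot batch:
    [batch, discrete_size] -> [batch]."""
    return torch.argmax(x, dim=1)

x = torch.tensor([[0., 0., 1., 0.],
                  [1., 0., 0., 0.]])
print(undo_one_hot(x))  # tensor([2, 0])
```

Note this only recovers the index correctly because each row is a valid one-hot vector; argmax on arbitrary data would silently return something else.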