Variable observation size

How severely does this issue affect your experience of using Ray?

  • Medium: It makes completing my task significantly more difficult, but I can work around it.

Hello!
Is there a recommended way to deal with variable observation sizes, other than padding the observations to a fixed size?

Following the docs page "Variable-length / complex observation spaces", I tried using a Repeated space, but observations are padded under the hood before being stored in the train batch (I am using PPO). The problem in my case is that the observation size can vary a lot, so the max_len of the Repeated space is usually much larger than the actual observation size: a lot of useless padding is stored, to the point where I run out of memory.
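For reference, here is a minimal sketch of the kind of setup I mean (the item space and max_len below are made-up values, not my actual ones):

```python
import numpy as np
import gymnasium as gym  # older Ray versions use `gym` instead
from ray.rllib.utils.spaces.repeated import Repeated

# Hypothetical per-item space: each observation is a variable-length list
# of 4-dim feature vectors.
item_space = gym.spaces.Box(low=-1.0, high=1.0, shape=(4,), dtype=np.float32)

# max_len has to cover the worst case (an assumed 1000 items here), even
# though most observations only contain a handful of items. Everything
# stored in the train batch gets padded up to max_len, hence the memory issue.
obs_space = Repeated(item_space, max_len=1000)

# A typical "short" observation: only 3 items out of a possible 1000.
short_obs = [item_space.sample() for _ in range(3)]
```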

Is there any solution to this problem? What are the alternatives?

There was a similar discussion here, but it seems no solution was found at the time.


In your case, I’d consider setting compress_observations to True.

If you’re using the new config API, you can set compress_observations=True in the .rollouts() method of your config builder.

If you’re using the old config dictionaries, you can add "compress_observations": True to your config.
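A minimal sketch of both variants (assuming a recent Ray 2.x release with PPO; CartPole-v1 is just a placeholder for your own environment):

```python
from ray.rllib.algorithms.ppo import PPOConfig

# New config API: compress observations in the collected sample batches.
config = (
    PPOConfig()
    .environment("CartPole-v1")            # placeholder; use your own env
    .rollouts(compress_observations=True)
)
algo = config.build()

# Old-style config dict: the equivalent setting.
legacy_config = {
    "env": "CartPole-v1",
    "compress_observations": True,
}
```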

This keeps the observations compressed until they are needed for training.

If that doesn’t work for your use case, then you’ll probably need to reformulate your problem.