Best way to extract 2D observation in model forward pass, then flatten the remaining observations?

Hey all. I’m dealing with an issue where the observation going into the forward pass of my model contains an image as well as several standard 1D vectors. This means the unflattened observation dictionary has one 2D observation plus a whole bunch of 1D observations.

My goal is to pass the 2D observation into one part of my model and the remaining 1D observations into a separate part, which means I then need to take those unflattened 1D obs and flatten them into a single tensor. Is there a more efficient way of doing this flattening than looping over all of the observations one by one, accessing their values, and appending them to a tensor? Thanks in advance!

EDIT: I’m using PyTorch for my custom model.
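For context, here is roughly what my current approach looks like (a simplified sketch; the key name "image" is just a placeholder):

```python
import torch

def split_and_flatten(obs_dict, image_key="image"):
    # Pull out the single 2D observation, e.g. shape [B, H, W].
    image = obs_dict[image_key]
    # Current approach: loop over the remaining 1D obs one by one and
    # collect them before concatenating -- this is the loop I'd like to avoid.
    parts = []
    for key, value in obs_dict.items():
        if key != image_key:
            parts.append(value)
    vector = torch.cat(parts, dim=1)  # shape [B, total_1d_dim]
    return image, vector
```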

Hi!

What does your observation look like?
If it looks similar to the following:

[[a1, a2, …, an],  # 2D component
 [b1, b2, …, bn],  # 2D component
 […],
 [x1, x2, …, xn],  # 2D component
 [y1, y2, …, ym],  # 1D component
 [z1, z2, …, zo]]  # 1D component

… then you can always use torch.flatten() on whatever part of your input you want to feed into a dense layer. That way you do not need to loop over individual values and append them one by one.
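A minimal sketch of what I mean, assuming the obs arrives as a dict with one image-like entry (the key names and shapes below are made up):

```python
import torch

def split_obs(obs_dict, image_key="image"):
    # Keep the 2D observation as-is for the conv branch.
    image = obs_dict[image_key]
    # torch.flatten + a single torch.cat handle all remaining entries per key,
    # so there is no per-element appending to a tensor.
    vector = torch.cat(
        [torch.flatten(v, start_dim=1)
         for k, v in obs_dict.items() if k != image_key],
        dim=1,
    )
    return image, vector

# Usage with hypothetical shapes: batch of 4, a 32x32 image and two 1D vectors.
obs = {
    "image": torch.rand(4, 32, 32),
    "y": torch.rand(4, 5),
    "z": torch.rand(4, 3),
}
img, vec = split_obs(obs)
print(img.shape, vec.shape)  # torch.Size([4, 32, 32]) torch.Size([4, 8])
```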

The ModelV2 interface also gives you a way to recover the original (dict) observation from a flattened one. So if your original observation space has the correct dimensions, maybe that helps as well.
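If you are on RLlib's ModelV2, I believe the helper is restore_original_dimensions; roughly like this (signature from memory, so double-check it against your Ray version; the function name unpack_obs is made up):

```python
from ray.rllib.models.modelv2 import restore_original_dimensions

def unpack_obs(model, input_dict):
    # `model` is your custom TorchModelV2. Note that input_dict["obs"] is
    # usually already the unflattened dict observation; if you only have the
    # flat tensor, this restores the original dict layout from it.
    return restore_original_dimensions(
        input_dict["obs_flat"], model.obs_space, tensorlib="torch"
    )
```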

Does this solve your problem?