Hi everyone!
I think it would be really nice to be able to apply a mask to a view requirement directly, without having to do it in post-processing. It would be both more efficient and more reliable, since every operation would be done in the same place in the code, without having to share a mask attribute with every object that has to process raw trajectory view data. I think it would be relevant because, to me, it is pretty common to stack only part of the observation, especially in the case of a Dict observation space before flattening.
Hey @duburcqa , could you specify in more detail, what you mean by “apply mask to view_requirement”?
Do you mean that when you have a Dict obs space, that you should be able to have sub-view requirements for the different components? It’s true, currently, you can only stack the entire (flattened) obs.
Do you mean that when you have a Dict obs space, that you should be able to have sub-view requirements for the different components?
Yes, but my request is more general: it would also be useful if the observation is a Box space. Indeed, most of the time, even if the observation space is flat from the very beginning, the information it carries is likely to be heterogeneous (for instance, actual observation/sensor data plus a task specification). It would make sense to stack a partial view of it, in this case the sensor data, but not the task specification, since it is constant during the whole episode. Just being able to specify a mask that would be applied to the flattened data, using the very same notation as the shift argument, would be very useful.
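The behavior requested above can be sketched in plain NumPy. This is purely illustrative, not RLlib code: the slice boundaries, helper name, and the 6+2 observation layout are all made up for the example. It shows stacking only the sensor part of a flat Box observation across frames while keeping the constant task specification once.

```python
import numpy as np

# Hypothetical flat 8-dim observation layout (illustration only):
# slots 0..5 = sensor readings (vary per step -> worth stacking),
# slots 6..7 = task specification (constant over the episode).
SENSOR_SLICE = slice(0, 6)
TASK_SLICE = slice(6, 8)

def build_stacked_obs(trajectory, num_stack=4):
    """Stack the masked (sensor) part of the last `num_stack`
    observations, then append the constant task spec once."""
    recent = trajectory[-num_stack:]
    stacked_sensors = np.concatenate([obs[SENSOR_SLICE] for obs in recent])
    task_spec = recent[-1][TASK_SLICE]  # constant part, latest copy
    return np.concatenate([stacked_sensors, task_spec])

# Example trajectory of five 8-dim observations.
traj = [np.arange(8, dtype=float) + 10 * t for t in range(5)]
out = build_stacked_obs(traj)
assert out.shape == (4 * 6 + 2,)  # 24 stacked sensor values + 2 task values
```

The point of the sketch is the saving: without the mask, stacking the full observation would store the unchanging task spec `num_stack` times.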
Yeah, I totally agree, also about being able to do this for flat Box spaces.
I’ll put this on my (long :/) TODO list.
For the traj. view API, I have two larger items remaining:
- Multi-agent comm. channels (planned for Q2).
- Being able to define “masks” for views, e.g. “obs.a” could be the “a” key of a Dict obs, or “obs.2_4” could be the 2nd to 4th slot in a simple Box 1D space (see discussion above).
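The dotted view-key notation sketched in that second bullet could be parsed along these lines. This is a hypothetical sketch of the proposal, not an existing RLlib API: the function name and the rule that “2_4” means an inclusive slot range are assumptions based on the examples given above.

```python
def parse_view_key(key):
    """Split a view key like 'obs', 'obs.a', or 'obs.2_4' into the base
    view name plus an optional sub-selector (Dict key or Box slice)."""
    base, _, sub = key.partition(".")
    if not sub:
        return base, None  # whole view, no mask
    parts = sub.split("_")
    if len(parts) == 2 and all(p.isdigit() for p in parts):
        lo, hi = map(int, parts)
        return base, slice(lo, hi + 1)  # '2_4' -> slots 2..4 inclusive
    return base, sub  # Dict component name, e.g. 'a'

assert parse_view_key("obs") == ("obs", None)
assert parse_view_key("obs.a") == ("obs", "a")
assert parse_view_key("obs.2_4") == ("obs", slice(2, 5))
```

A Dict obs would then be indexed by the returned key, and a flat Box obs sliced by the returned slice, before frame stacking.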
Has this limitation been addressed by now?