Confused about "adjust_nstep"

Hi, I’m confused about the multi-step reward implementation in `adjust_nstep`:

Here, `len_` and `n_step` are equal, so except for the first trajectory in the batch, the multi-step returns for the remaining trajectories would use fewer than `n_step` steps. Is my understanding correct? Is this implementation reasonable?
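For concreteness, here is a minimal sketch of the behavior I am describing (the function name and signature below are hypothetical, not the library's actual code): near the end of a trajectory there are fewer than `n_step` rewards left, so later indices accumulate truncated returns.

```python
def nstep_return_sketch(rewards, gamma, n_step):
    """Per-index multi-step returns over a single trajectory.

    For index i, only min(n_step, len(rewards) - i) rewards remain,
    so indices near the end of the trajectory sum fewer terms.
    """
    T = len(rewards)
    returns = []
    for i in range(T):
        steps = min(n_step, T - i)  # truncated near the trajectory end
        g = sum(gamma ** k * rewards[i + k] for k in range(steps))
        returns.append(g)
    return returns

# With 5 unit rewards, gamma=1.0, n_step=3: the first three indices
# sum 3 rewards each, but the last two sum only 2 and 1 rewards.
print(nstep_return_sketch([1.0] * 5, 1.0, 3))
```

If this matches what the code does, my question is whether the shortened horizon for the tail transitions is intentional.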