Get_initial_state for LSTM custom model without initial FC

Hi, I have just put the attention net with its requirements in one file:

and then I replaced the model with an LSTM-only version without the initial FC layers.
It is in this file:

The problem I am facing is with the get_initial_state function.
I have tried many options, and none of them seems to work:

def get_initial_state(self):
    # h = [
    #     torch.zeros(self.lstm_size),
    #     torch.zeros(self.lstm_size)
    # ]
    # h = [
    #     torch.zeros(1, self.lstm_size),
    #     torch.zeros(1, self.lstm_size)
    # ]
    h = ...  # what should go here?
    return h

Can you please give me a hint on this?
In pure PyTorch one of these methods should work, but in Ray it does not.
I am getting the following error:

(PPOTrainer pid=133035) RuntimeError: Expected hidden[0] size (1, 32, 16), got [1, 4, 16]

Hey @mg64ve , thanks for the question! Some problems I see in your implementation of get_initial_state:

  • The return value should always be a list of state tensors, so in your case a list with a single item, the h-state tensor (you are returning h directly, without wrapping it in a list).
  • You seem to return a state tensor with the same shape as the weight matrix, but I think you should return one with the same shape as the bias vector.

Also, state tensors in your returned list should all be non-batched, but I think you are doing this correctly here.
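Putting the points above together, here is a minimal sketch of what get_initial_state could look like. The class and attribute names are assumptions based on the snippet above (in the real code this would be a method on your RLlib model class), and I am returning both an h- and a c-state to match the commented-out attempts, since a PyTorch LSTM carries two state tensors:

```python
import torch

# Hypothetical stand-in for the custom model; self.lstm_size is assumed
# to be the LSTM hidden size (16 in the error message above).
class LSTMStateHolder:
    def __init__(self, lstm_size: int = 16):
        self.lstm_size = lstm_size

    def get_initial_state(self):
        # Return a LIST of NON-batched zero tensors, each shaped like the
        # LSTM bias vector: [lstm_size]. RLlib adds the batch (and
        # num-layers) dimensions itself before feeding these into the LSTM.
        h = [
            torch.zeros(self.lstm_size),  # initial hidden state h_0
            torch.zeros(self.lstm_size),  # initial cell state c_0
        ]
        return h
```

The key differences from the commented attempts: no extra leading dimension on the tensors (they are 1-D, not `(1, lstm_size)`), and the return value is the list itself rather than a bare tensor.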