Impala Deep Residual (Custom) Model

How severely does this issue affect your experience of using Ray?

  • Low: It annoys or frustrates me for a moment.

Ray: 2.1.0
TensorFlow: 2.10.0

Hi

I’m trying to replicate the IMPALA deep residual model from the paper, but without the embedding part (the right-hand side of the figure). See the image below:

[Image: IMPALA deep residual network architecture from the paper]

So far I’ve managed to feed the output of the residual CNN part into the RNN and get that working with RandomEnv.

Code can be found here:
Impala_Deep_Residual_Model
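
For anyone reading along, the CNN part is essentially the IMPALA residual tower. A minimal Keras sketch of that tower (not the linked code; the input shape and channel sizes are just the values from the paper) looks like this:

```python
import tensorflow as tf


def residual_block(x, filters):
    # Two 3x3 convs with a skip connection, ReLU before each conv (as in the paper).
    shortcut = x
    x = tf.keras.layers.ReLU()(x)
    x = tf.keras.layers.Conv2D(filters, 3, padding="same")(x)
    x = tf.keras.layers.ReLU()(x)
    x = tf.keras.layers.Conv2D(filters, 3, padding="same")(x)
    return tf.keras.layers.Add()([shortcut, x])


def impala_section(x, filters):
    # Conv -> 3x3 max-pool (stride 2) -> two residual blocks.
    x = tf.keras.layers.Conv2D(filters, 3, padding="same")(x)
    x = tf.keras.layers.MaxPool2D(pool_size=3, strides=2, padding="same")(x)
    x = residual_block(x, filters)
    return residual_block(x, filters)


inputs = tf.keras.Input(shape=(72, 96, 3))   # placeholder observation shape
x = inputs
for filters in (16, 32, 32):                 # channel sizes from the paper
    x = impala_section(x, filters)
x = tf.keras.layers.ReLU()(x)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(256, activation="relu")(x)
cnn_trunk = tf.keras.Model(inputs, x)
```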

However, as in the paper, I would also like to feed the previous action and reward into the LSTM block. I’ve had a close look at this part of the documentation (ViewRequirement), but I appear to be unable to access the input_dict in forward_rnn(…).

Can anyone help?

BR

Jorgen

Hi @Jorgen_Svane,

The easiest way to do this in RLlib would be to run all the layers up through the last ReLU in your custom model’s forward() method, then concatenate that output with r_t-1 and a_t-1. Store the result in input_dict["obs_flat"] and then call self.forward_rnn(…).
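
Roughly something like this (untested sketch, not a drop-in solution: the conv stack below is a simple stand-in for your residual tower, a Discrete action space is assumed, and the prev-action/prev-reward view requirements are declared in the constructor so they show up in input_dict):

```python
import numpy as np
import tensorflow as tf
from ray.rllib.models.tf.recurrent_net import RecurrentNetwork
from ray.rllib.policy.sample_batch import SampleBatch
from ray.rllib.policy.view_requirement import ViewRequirement


class ResidualCNNPlusLSTM(RecurrentNetwork):
    """CNN trunk in forward(), prev. action/reward appended, LSTM in forward_rnn()."""

    def __init__(self, obs_space, action_space, num_outputs, model_config, name):
        super().__init__(obs_space, action_space, num_outputs, model_config, name)
        self.cell_size = model_config.get("lstm_cell_size", 256)

        # Stand-in for the residual tower ending in the last ReLU/Dense.
        self.cnn = tf.keras.Sequential([
            tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu"),
            tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu"),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(256, activation="relu"),
        ])
        self.lstm = tf.keras.layers.LSTM(
            self.cell_size, return_sequences=True, return_state=True)
        self.logits = tf.keras.layers.Dense(num_outputs)
        self.value_branch = tf.keras.layers.Dense(1)

        # Ask RLlib to ship a_t-1 and r_t-1 into input_dict.
        self.view_requirements[SampleBatch.PREV_ACTIONS] = ViewRequirement(
            SampleBatch.ACTIONS, shift=-1, space=self.action_space)
        self.view_requirements[SampleBatch.PREV_REWARDS] = ViewRequirement(
            SampleBatch.REWARDS, shift=-1)

    def forward(self, input_dict, state, seq_lens):
        # CNN features from the image observation.
        features = self.cnn(tf.cast(input_dict["obs"], tf.float32))
        # One-hot a_t-1 (Discrete action space assumed) and scalar r_t-1.
        prev_a = tf.one_hot(
            tf.cast(input_dict[SampleBatch.PREV_ACTIONS], tf.int32),
            self.action_space.n)
        prev_r = tf.reshape(
            tf.cast(input_dict[SampleBatch.PREV_REWARDS], tf.float32), [-1, 1])
        # Overwrite obs_flat so the base class feeds the concatenation to the RNN.
        input_dict["obs_flat"] = tf.concat([features, prev_a, prev_r], axis=1)
        return super().forward(input_dict, state, seq_lens)

    def forward_rnn(self, inputs, state, seq_lens):
        mask = tf.sequence_mask(seq_lens, maxlen=tf.shape(inputs)[1])
        lstm_out, h, c = self.lstm(inputs, initial_state=state, mask=mask)
        self._value_out = self.value_branch(lstm_out)
        return self.logits(lstm_out), [h, c]

    def get_initial_state(self):
        return [np.zeros(self.cell_size, np.float32),
                np.zeros(self.cell_size, np.float32)]

    def value_function(self):
        return tf.reshape(self._value_out, [-1])
```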

Hi @mannyv

Thanks for your reply. For now I opted for setting config = {…, "use_lstm": True, "lstm_use_prev_action": True, "lstm_use_prev_reward": True, …} and got it working here.
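
In case it helps anyone else, the config pattern I mean is roughly this (Ray 2.1.0 AlgorithmConfig style; the environment, registered custom model name, and cell size below are just placeholders):

```python
from ray.rllib.algorithms.impala import ImpalaConfig

config = (
    ImpalaConfig()
    .environment("BreakoutNoFrameskip-v4")   # placeholder env
    .framework("tf2")
    .training(
        model={
            "custom_model": "impala_deep_residual",  # assumes the CNN model was registered
            "use_lstm": True,                # auto-wrap the custom model in an LSTM
            "lstm_use_prev_action": True,    # feed a_t-1 into the LSTM
            "lstm_use_prev_reward": True,    # feed r_t-1 into the LSTM
            "lstm_cell_size": 256,
        }
    )
)
algo = config.build()
```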

I’ll probably switch to your solution later to get better control over the custom model.

BR

Jorgen