Sharing an LSTM cell between policies

Hello everybody,

What do you think: is it reasonable to share an LSTM cell of a neural network model between policies?
More precisely, each policy has its own input and output layers, but I want to share all the layers in between (including the LSTM) across the various policies.

I guess it should be reasonable, since the policies only share the weights of the LSTM cell, while each policy keeps its own cell and hidden states.

Does this reasoning hold up?

PS: The way I do this is equivalent to the example shown in shared_weights_model.py
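For anyone reading along, here is a minimal PyTorch sketch of the idea (this is my own illustration, not code from shared_weights_model.py; all class and variable names are made up). Two policies share one LSTM's weights but have separate input/output layers, and each passes its own `(h, c)` state through the shared module:

```python
import torch
import torch.nn as nn

# One LSTM instance whose weights are reused by every policy.
# (Sizes and names are illustrative assumptions.)
SHARED_LSTM = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)

class PolicyNet(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, 32)  # per-policy input layer
        self.lstm = SHARED_LSTM                # shared weights
        self.head = nn.Linear(64, act_dim)     # per-policy output layer

    def forward(self, obs, state):
        x = torch.relu(self.encoder(obs))
        # Each policy carries its OWN (h, c) state; only weights are shared.
        out, new_state = self.lstm(x, state)
        return self.head(out), new_state

    def initial_state(self, batch_size: int = 1):
        h = torch.zeros(1, batch_size, 64)
        return (h, h.clone())

# Two policies with different observation/action spaces:
p1 = PolicyNet(obs_dim=10, act_dim=4)
p2 = PolicyNet(obs_dim=6, act_dim=2)

# The LSTM parameters are literally the same tensors in both policies,
# while the encoders and heads are independent:
assert p1.lstm.weight_ih_l0 is p2.lstm.weight_ih_l0
assert p1.encoder.weight is not p2.encoder.weight
```

The key point is that `nn.LSTM` holds only the weights; the recurrent state is passed in and out of `forward`, so sharing the module does not mix up the policies' hidden states.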

This is what people usually do for multi-modal/multi-task learning, i.e. learning across genuinely different tasks, not just perturbations of the same env. It sounds good.


Hi klausk55,

You should just try it and let us know how it went in this thread!
Schulman put together some helpful slides on avoiding basic mistakes when evaluating an algorithm. So if you do your research and feel you've come up with something rather original that you'd like to test, consider having a look at his slides :slight_smile:
Good luck!
