Transfer Learning while changing the last layer of the model

How severe does this issue affect your experience of using Ray?

  • High: It blocks me from completing my task.

I want to use transfer learning to train a model: train on one task, then load that checkpoint and modify the last layer to match the possible actions of a second task. How can I do that?
I saw the following post, where the weights of a particular layer are replaced with other values, but in my case I want to change the shape of the last layer as well:
https://github.com/ray-project/ray/issues/5620
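For reference, that issue swaps a layer's weights in place, roughly like the toy Keras sketch below (model and layer names are made up); the new arrays must have the same shape as the old ones, which is exactly what doesn't hold in my case:

```python
import numpy as np
from tensorflow import keras

# Toy stand-in for the policy network (hypothetical architecture).
model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(8,)),
    keras.layers.Dense(4, name="action_head"),  # 4 actions in task A
])

# What the linked issue does: overwrite one layer's weights in place.
# This only works if the new arrays have the SAME shape as the old ones.
layer = model.get_layer("action_head")
kernel, bias = layer.get_weights()
layer.set_weights([np.zeros_like(kernel), np.zeros_like(bias)])
```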

Any help will be highly appreciated.

@Siddharth_Jain, what kind of model do you have? A plain Keras model or a ModelV2 RLlib model? Also, what is the algorithm you are using? PPO, DQN, …?

Generally, transfer learning works as usual with regard to the model itself: you cut off the last layer and replace it with your own (see for example here).
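For a plain Keras model that could look roughly like this minimal sketch (the checkpoint path, layer sizes, and the 6-action head are made-up stand-ins, not your actual setup):

```python
from tensorflow import keras

# Load the model trained on task A (hypothetical checkpoint path).
base = keras.models.load_model("task_a_model.h5")

# Cut off the old action head: reuse everything up to the second-to-last layer.
trunk_output = base.layers[-2].output
new_head = keras.layers.Dense(6, name="new_action_head")(trunk_output)  # 6 actions in task B
new_model = keras.Model(inputs=base.input, outputs=new_head)

# Optionally freeze the transferred trunk so only the new head trains at first.
for layer in new_model.layers[:-1]:
    layer.trainable = False

new_model.compile(optimizer="adam", loss="mse")
```

Freezing the trunk and fine-tuning only the new head is a common starting point; you can unfreeze layers later once the head has adapted.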

If your second task does not have the same actions as the first one, transfer learning is tricky. Consider DQN, for example. Your deep Q-network has learned the values of the different state-action pairs (at least relative to each other). Now you replace the last layer with a different one that has a different number of actions. The Q-values will be different, and even the transition dynamics of your MDP will differ if the new actions are not no-ops. So the chances are low that the Q-network will work seamlessly with a different action space.
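If you still want to try, one common approach is to transfer every layer whose weight shapes match and let the mismatched head start from a fresh initialization. A minimal self-contained Keras sketch (the toy architecture and action counts are assumptions, not your actual model):

```python
from tensorflow import keras

def build_model(num_actions):
    # Same trunk for both tasks; only the head size differs (toy architecture).
    return keras.Sequential([
        keras.layers.Dense(64, activation="relu", input_shape=(8,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(num_actions, name="action_head"),
    ])

old_model = build_model(num_actions=4)   # pretend this was trained on task A
new_model = build_model(num_actions=6)   # task B has a different action space

# Copy weights layer by layer wherever the shapes match; the mismatched
# head keeps its fresh initialization and must be retrained on task B.
for old_layer, new_layer in zip(old_model.layers, new_model.layers):
    old_w, new_w = old_layer.get_weights(), new_layer.get_weights()
    if all(o.shape == n.shape for o, n in zip(old_w, new_w)):
        new_layer.set_weights(old_w)
```

Even with the trunk transferred, expect to retrain on task B; the new head's Q-values start out meaningless relative to the transferred features.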