Trouble implementing a co-attention transformer in RLlib

How severely does this issue affect your experience of using Ray?

  • High: It blocks me from completing my task.

I am trying to implement a co-attention layer that fuses two tensors before passing the outputs to another encoder layer, and I am having some issues setting up the configs.
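To make the question concrete, here is a minimal sketch of the kind of co-attention layer I mean, assuming PyTorch and two batched sequence streams; the class name `CoAttention` and the shapes are just illustrative, not my actual model:

```python
# Hypothetical co-attention sketch: each stream attends to the other via
# cross-attention, producing fused representations for a downstream encoder.
import torch
import torch.nn as nn


class CoAttention(nn.Module):
    def __init__(self, embed_dim: int, num_heads: int = 4):
        super().__init__()
        # Queries come from stream A, keys/values from stream B (and vice versa).
        self.a_attends_b = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.b_attends_a = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, a: torch.Tensor, b: torch.Tensor):
        # a: (batch, len_a, embed_dim), b: (batch, len_b, embed_dim)
        a_ctx, _ = self.a_attends_b(query=a, key=b, value=b)
        b_ctx, _ = self.b_attends_a(query=b, key=a, value=a)
        return a_ctx, b_ctx


if __name__ == "__main__":
    layer = CoAttention(embed_dim=32)
    a = torch.randn(2, 5, 32)
    b = torch.randn(2, 7, 32)
    a_ctx, b_ctx = layer(a, b)
    print(a_ctx.shape, b_ctx.shape)
```

My understanding is that in RLlib this would be wrapped in a custom model (e.g. a `TorchModelV2` subclass) registered with `ModelCatalog.register_custom_model` and selected via `config["model"]["custom_model"]`, and the config part is where I am stuck.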