How to run RLlib on a multi-node GPU cluster using the open-source Ray framework

How severely does this issue affect your experience of using Ray?

  • None: Just asking a question out of curiosity

I see the following in the RLlib documentation: “The following table is an overview of all available algorithms in RLlib. Note that all of them support multi-GPU training on a single (GPU) node in Ray (open-source) as well as multi-GPU training on multi-node (GPU) clusters when using the Anyscale platform.”

Can I use open-source Ray to run RLlib training across multiple GPU nodes? What difficulties should I expect?
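For context, this is roughly what I would try, based on my reading of the RLlib `AlgorithmConfig` API on the new API stack (the method names `env_runners`, `learners`, and the parameters `num_learners` / `num_gpus_per_learner` are my understanding of recent Ray versions; I have not been able to test this on an actual multi-node cluster):

```python
# Untested sketch: schedule multiple GPU learners, which Ray's scheduler
# may place on different nodes of the cluster. Assumes a Ray cluster is
# already running and `ray.init(address="auto")` connects to it.
import ray
from ray.rllib.algorithms.ppo import PPOConfig

ray.init(address="auto")  # connect to the existing multi-node cluster

config = (
    PPOConfig()
    .environment("CartPole-v1")
    .env_runners(num_env_runners=8)          # CPU rollout workers
    .learners(
        num_learners=4,                      # one learner actor per GPU
        num_gpus_per_learner=1,              # learners can land on any node
    )
)

algo = config.build()
for _ in range(10):
    print(algo.train()["env_runners"]["episode_return_mean"])
```

My assumption is that the scheduling itself works on open-source Ray, since learners are ordinary Ray actors, and the documentation note is about what is officially supported/tested rather than a hard technical limit; I would appreciate confirmation either way.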