How severely does this issue affect your experience of using Ray?
- None: Just asking a question out of curiosity
Inspired by Isaac Gym's and Brax's ability to obtain massive speedups through parallelization, by running both the environment and the inference on the GPU,
I would like to know whether such a computing model is possible in RLlib.
A few months ago I asked on Slack, and someone told me this feature would require writing a GPUSampleCollector class derived from the current SampleCollector class,
and a GPUVectorEnv class derived from VectorEnv, which would run on the GPU and produce GPU outputs (possibly storing all data in PyTorch GPU tensors).
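To make the idea concrete, here is a minimal, hypothetical sketch of what such a GPU-resident vectorized environment could look like. It does not use RLlib's actual VectorEnv API; the class name, the toy dynamics, and the method signatures are all my own assumptions. The point is just that all N sub-environments live in one batched tensor, so reset/step are single vectorized ops with no per-env Python loop and no host-device transfers:

```python
import torch

class BatchedToyEnv:
    """Hypothetical GPU-vectorized env sketch (NOT RLlib's VectorEnv API).

    All sub-environment states are stored in one (num_envs, 2) tensor,
    so stepping every environment is a single batched tensor op that
    can run entirely on the GPU when a CUDA device is available.
    """

    def __init__(self, num_envs, device=None):
        # Fall back to CPU so the sketch also runs without a GPU.
        self.device = device or ("cuda" if torch.cuda.is_available() else "cpu")
        self.num_envs = num_envs
        self.state = torch.zeros(num_envs, 2, device=self.device)

    def reset(self):
        # Small random initial states; observations stay on the device.
        self.state = torch.rand(self.num_envs, 2, device=self.device) * 0.1
        return self.state

    def step(self, actions):
        # actions: (num_envs, 1) tensor already on self.device,
        # e.g. produced by a policy network running on the same GPU.
        pos, vel = self.state[:, 0], self.state[:, 1]
        vel = vel + 0.05 * actions.squeeze(-1)   # toy dynamics, batched
        pos = pos + 0.05 * vel
        self.state = torch.stack([pos, vel], dim=1)
        reward = -pos.abs()                      # per-env rewards, batched
        done = pos.abs() > 1.0                   # per-env done flags
        return self.state, reward, done
```

A policy could then run inference directly on the returned observation tensor, and a hypothetical GPU sample collector could accumulate these tensors into rollout buffers without ever copying them back to host memory.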
There was some interest at the time, but I have heard nothing in the past few months.
Is anybody interested in this subject?