RLlib in conjunction with a GPU env

How severely does this issue affect your experience of using Ray?

  • High: It blocks me from completing my task.

Setting: I have created a hierarchical multi-agent environment based on the Python MultiAgentEnv class. The environment uses the GPU to run complex calculations implemented in a C++ library.
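
For concreteness, here is a stripped-down sketch of my setup (names like `gpu_sim`, `GpuSimLib.Simulator`, and `HierarchicalEnv` are placeholders for illustration, not my actual code, and the gymnasium-style 5-tuple API is an assumption about the Ray version):

```python
import gymnasium as gym
from ray.rllib.env.multi_agent_env import MultiAgentEnv

# "gpu_sim" stands in for the Python bindings of the C++/CUDA library
# (hypothetical module name).
import gpu_sim


class HierarchicalEnv(MultiAgentEnv):
    """Hierarchical multi-agent env; the step computation runs on the GPU."""

    def __init__(self, config=None):
        super().__init__()
        self._agent_ids = {"manager", "worker_0", "worker_1"}
        self.observation_space = gym.spaces.Box(-1.0, 1.0, (8,))
        self.action_space = gym.spaces.Discrete(4)
        # The C++ library selects its CUDA device here; this is the call
        # that stops finding a device once the env runs under RLlib/Tune.
        self._sim = gpu_sim.Simulator(device="cuda:0")

    def reset(self, *, seed=None, options=None):
        obs = {aid: self._sim.observe(aid) for aid in self._agent_ids}
        return obs, {}

    def step(self, action_dict):
        self._sim.advance(action_dict)  # GPU-side computation
        obs = {aid: self._sim.observe(aid) for aid in action_dict}
        rewards = {aid: 0.0 for aid in action_dict}
        terminateds = {"__all__": self._sim.done()}
        truncateds = {"__all__": False}
        return obs, rewards, terminateds, truncateds, {}
```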
What is working: When I run the env on its own with simulated action dicts, it runs fine.
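
The standalone check that works looks roughly like this (continuing the sketch above, with random actions sampled per agent):

```python
# Standalone smoke test: drive the env with simulated action dicts.
env = HierarchicalEnv()
obs, _ = env.reset()
for _ in range(100):
    actions = {aid: env.action_space.sample() for aid in obs}
    obs, rewards, terminateds, truncateds, _ = env.step(actions)
    if terminateds["__all__"]:
        obs, _ = env.reset()
```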
Problem: When I train with Tune, the C++ library no longer detects my CUDA device.
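
The training run that triggers the problem is launched roughly like this (the algorithm choice, the legacy `tune.run` API, and the exact config values are illustrative assumptions, not my exact script):

```python
import ray
from ray import tune

ray.init()

tune.run(
    "PPO",
    config={
        "env": HierarchicalEnv,  # env class from the sketch above
        "num_workers": 2,        # rollout workers host the env copies
        "num_gpus": 1,           # per the docs, this applies to the
                                 # trainer process, not the workers
    },
    stop={"training_iteration": 10},
)
```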
Question: How do Ray RLlib/Tune affect GPU availability when a cluster is started? Would using an ExternalEnv solve the problem?