How severely does this issue affect your experience of using Ray?
- High: It blocks me from completing my task.
I want to load a trained agent so I can use its `compute_action()` method. To do this, I load the trainer by calling `trainer = tune.registry.get_trainable_cls(class_name)(config=config)`, with `config` loaded from the `params.pkl` saved in the agent directory. However, this does several unnecessary things that I do not want to happen, such as starting local and remote workers, each of which creates its own environment (just as during training).
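Here is a minimal sketch of what I am doing now (the agent directory and the trainable name are placeholders for my actual setup):

```python
import pickle
from pathlib import Path

from ray.tune.registry import get_trainable_cls

# Placeholders for my actual results directory and algorithm name
agent_dir = Path("~/ray_results/PPO/PPO_MyEnv_00000").expanduser()
class_name = "PPO"

# Load the config that Tune saved alongside the checkpoints
with open(agent_dir / "params.pkl", "rb") as f:
    config = pickle.load(f)

# This builds the full Trainer: it starts local and remote
# workers, and each worker creates an environment instance
trainer = get_trainable_cls(class_name)(config=config)
```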
How can I use the compute_action() method without all this additional overhead? Should I load the agent differently?
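For reference, this is roughly the usage I am after once the agent is loaded (the checkpoint path and observation are placeholders):

```python
# Restore the trained weights, then just query the policy
trainer.restore(checkpoint_path)  # e.g. ".../checkpoint_000100/checkpoint-100"
action = trainer.compute_action(observation)
```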