How severely does this issue affect your experience of using Ray?
- High: It blocks me from completing my task.
I have trained PPO on a machine with multiple GPUs and saved a checkpoint. Now I need to use it for inference on my own CPU-only machine, for which I apparently have to change the configuration of the trained model before running inference. I have used the following approach to modify the configuration, but I still can't use the model for inference.
```python
import os

from ray.rllib.algorithms.algorithm import Algorithm
from ray.rllib.algorithms.ppo import PPOConfig

# Placeholder for my actual checkpoint directory (path omitted here).
checkpoint_path = "..."

# Step 1: Load the trained algorithm and grab its config as a dict.
original_ppo = Algorithm.from_checkpoint(checkpoint_path)
ppo_configurations = original_ppo.config.to_dict()

# Step 2: Override the resource settings for the CPU-only machine.
ppo_configurations["num_env_runners"] = 1
ppo_configurations["num_cpus_per_env_runner"] = 1
ppo_configurations["num_gpus_per_env_runner"] = 0
ppo_configurations["num_gpus"] = 0
ppo_configurations["explore"] = False

# Step 3: Reinitialize the algorithm with the updated configuration
updated_config = PPOConfig().from_dict(ppo_configurations)
new_ppo = updated_config.build()
new_ppo.restore(os.path.abspath(checkpoint_path))
```
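To make the question concrete, here is a minimal sketch of the check I can run right before `build()`, confirming that the overridden keys actually end up in the dict and in the rebuilt config (key names are the same ones I set above):

```python
# Minimal sketch: print the resource-related keys after applying my overrides,
# to confirm the dict carries the CPU-only settings before build().
for key in (
    "num_env_runners",
    "num_cpus_per_env_runner",
    "num_gpus_per_env_runner",
    "num_gpus",
    "explore",
):
    print(key, "=", ppo_configurations.get(key))

# The same check on the rebuilt config object, in case from_dict() drops or
# renames any of these keys.
rebuilt = updated_config.to_dict()
for key in ("num_env_runners", "num_gpus_per_env_runner", "num_gpus"):
    print("rebuilt:", key, "=", rebuilt.get(key))
```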
Following is the warning I get when the code is executed; the program then halts and stays there:
```
The following resource request cannot be scheduled right now: {'CPU': 6.0, 'GPU': 0.25}
```
I had trained the model with 6 CPUs and 0.25 GPUs per env_runner, which matches the resource request in the warning. So I need to confirm whether there is something wrong with the configuration values above or with the way I'm modifying them.
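For reference, this is my reading of the situation (the training-time values below are reconstructed from the warning, not copied from my training script): the scheduler still seems to be requesting the training-time per-env_runner resources rather than the overridden ones.

```python
# Assumption: training-time resource settings, reconstructed from the warning.
training_resources = {
    "num_cpus_per_env_runner": 6,     # -> 'CPU': 6.0 in the warning
    "num_gpus_per_env_runner": 0.25,  # -> 'GPU': 0.25 in the warning
}

# What I intend the inference-time settings to be on the CPU-only machine.
inference_resources = {
    "num_env_runners": 1,
    "num_cpus_per_env_runner": 1,
    "num_gpus_per_env_runner": 0,
    "num_gpus": 0,
}
```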