How to set the number of CPUs and GPUs per trial?

I’m trying some really basic code:

from ray import tune
from ray.rllib.agents.ppo import PPOTrainer
tune.run(PPOTrainer, config={"env": "CartPole-v0"})

But it’s always pending with this message:

2021-11-02 14:19:26,189 WARNING trial_executor.py:306 – Ignore this message if the cluster is autoscaling. You asked for 3.0 cpu and 0.0 gpu per trial, but the cluster only has 2.0 cpu and 1.0 gpu. Stop the tuning job and adjust the resources requested per trial (possibly via resources_per_trial or via num_workers for rllib) and/or add more resources to your Ray runtime.

Then I add this code:

PPOTrainer.default_resource_request({'cpu':1, "gpu":1})

with the intent of requesting fewer CPUs so it can run. But nothing happens; it still asks for "3.0 cpu and 0.0 gpu per trial".

Then I try the resources_per_trial parameter of tune.run, and I get this error:

ValueError: Resources for <class 'ray.rllib.agents.trainer_template.PPO'> have been automatically set to <ray.tune.utils.placement_groups.PlacementGroupFactory object at 0x7f1f64833290> by its `default_resource_request()` method. Please clear the `resources_per_trial` option.

Now what do I do?

Hi @JingZhang918 ,

and welcome to the Ray community. You have to set the num_gpus, num_cpus_per_worker, and num_cpus_for_driver config options (num_gpus_per_worker is usually only needed if your environment itself requires a GPU). The trial asks for 3 CPUs because PPO's default num_workers is 2, so it needs 2 worker CPUs plus 1 driver CPU; lowering num_workers lowers the request. See the Trainer class for more info.
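
For example, here is a minimal sketch of what that could look like for your snippet. It assumes the older ray.rllib.agents API you are already using; the exact numbers are just a guess for a 2-CPU / 1-GPU machine, so adjust them to your hardware:

from ray import tune
from ray.rllib.agents.ppo import PPOTrainer

tune.run(
    PPOTrainer,
    config={
        "env": "CartPole-v0",
        # 1 rollout worker instead of the default 2, so the trial only
        # needs 1 worker CPU + 1 driver CPU = 2 CPUs in total
        "num_workers": 1,
        "num_cpus_per_worker": 1,
        "num_cpus_for_driver": 1,
        # give the driver/learner your single GPU; set to 0 to train on CPU only
        "num_gpus": 1,
    },
)

With these settings the automatically generated placement group should fit inside your 2-CPU / 1-GPU cluster, so you don't need to touch resources_per_trial at all.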

Hope that helps you get your code to run.