PyTorch uses only one CPU per trial

Hello,

When setting resources_per_trial={'cpu': 2, 'gpu': 0.5}, I expect PyTorch to use 2 CPUs per trial, and two trials should run at the same time, since I have one GPU available. However, two CPUs sit idle.

What should I do differently?

These are my Ray commands:

ray.init(num_cpus=4, num_gpus=1, include_dashboard=True)
analysis = tune.run(
        NN_tune_trainable,
        name=name,
        stop=tune_stop,
        config=cfg,
        num_samples=num_samples,
        local_dir=result_path,
        checkpoint_freq=checkpoint_freq,
        checkpoint_at_end=True,
        resources_per_trial={'cpu': 2, 'gpu': 0.5},
        max_failures=3,
    )

Is it necessary to set torch.set_num_threads() or something like that?
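For context, this is roughly what I mean, i.e. pinning the thread count inside the trainable. It is only a sketch: I'm assuming a function-style trainable here for illustration (the real body of NN_tune_trainable isn't shown), and the 2 is hardcoded to match resources_per_trial above.

import torch

def NN_tune_trainable(config):
    # Sketch (assumed trainable body): cap PyTorch's intra-op thread
    # pool at the 2 CPUs reserved per trial in resources_per_trial.
    torch.set_num_threads(2)
    # ... build the model, run the training loop, report metrics to Tune ...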

cc @rliaw, can you address this question? It's Tune-related.

Hey,

I found the solution: I was just too impatient. It takes some time, but then two PIDs are removed, and I assume their resources are combined onto the remaining PIDs.