Why is `OMP_NUM_THREADS=1 PYTHONWARNINGS=ignore` used in cluster commands?

Does anyone know why this was introduced?

Two concerns:

  1. Constraining OMP_NUM_THREADS may hurt the commands being run if they expect more threads. (Or does Ray already reset it to match `remote(num_cpus)`? See the sketch below.)
  2. Setting PYTHONWARNINGS=ignore is nice for denoising output, but it may hide important warnings.

(I can file this as an issue, but I'm posting here because I don't understand the background.)
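
To make concern 1 concrete, here is a minimal sketch of the workaround I would expect to need: a single CPU-heavy task raising its own thread count to match its `num_cpus` reservation. The task body, the thread counts, and the reliance on `torch.set_num_threads` are my own assumptions, not something Ray does automatically.

```python
import os
import ray

ray.init()

# Hypothetical workaround: one CPU-heavy task overrides the cluster-wide
# OMP_NUM_THREADS=1 default to match its own num_cpus reservation.
@ray.remote(num_cpus=4)
def heavy_numeric_task():
    # OMP_NUM_THREADS is only read when the OpenMP runtime starts, so the
    # runtime call torch.set_num_threads is the more reliable knob here.
    os.environ["OMP_NUM_THREADS"] = "4"
    import torch
    torch.set_num_threads(4)  # intra-op parallelism; safe to call at runtime
    x = torch.randn(1024, 1024)
    return (x @ x).sum().item()

print(ray.get(heavy_numeric_task.remote()))
```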

Due diligence, I guess?

I was spinning up a cluster, saw these variables in the generated commands, and traced them back through the code:

```
$ git log -S'export OMP_NUM_THREADS=1 PYTHONWARNINGS=ignore' --oneline
567009d5f [Autoscaler] Fix k8s command runner when command fails (#10966)
169c3a46d [k8s] Broken Command Interactivity (#10297)
0b5d5ec17 [Autoscaler] Pass custom resources to "ray start" multi instance autoscaling  (#9986)
09b9b81ea [autoscaler] Move command runners into separate file and clean up interface. (#9340)
37053443b Restore set omp (#7051)
8b4b49662 Force OMP_NUM_THREADS=1 if unset (#6998)
9473da69b [autoscaler] Experimental support for local / on-prem clusters (#2678)
```

Looking at https://github.com/ray-project/ray/pull/2678, I don't see much explanation there of why it was added.

Hey @ericl, can you please help with this one?

These settings make frameworks like torch and tensorflow much more usable by default in Ray. Without limiting OMP_NUM_THREADS by default, torch will spawn many threads per worker, leading to thrashing and lower performance. Without PYTHONWARNINGS disabled, the output of a typical Ray program becomes unreadable. Feel free to experiment with disabling these settings if the defaults aren't working well for your situation.
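
If the suppressed warnings matter to you, one option (a sketch, assuming the suppression comes from the filter that PYTHONWARNINGS=ignore installs at interpreter startup) is to reset the warning filters at runtime in your driver or inside a task:

```python
import warnings

# PYTHONWARNINGS=ignore installs an "ignore" entry in warnings.filters at
# interpreter startup; clearing it restores normal warning output.
warnings.resetwarnings()          # drop all filters, including the startup "ignore"
warnings.simplefilter("default")  # reinstate Python's default warning behavior
```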