Population Based Training (PBT) in Ray Tune should clone high-performing trials into low-performing ones regardless of logging settings. The behavior you observed, where setting logging_level='error' prevents cloning, is not documented or expected per the official sources. The PBT scheduler's core exploit/explore logic is independent of logging configuration, as seen in the implementation and documentation: logging produces informational output only and does not control trial cloning or mutation behavior (PopulationBasedTraining docs, pbt.py source).
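To make the point concrete, here is a minimal toy sketch of a PBT-style exploit/explore step. This is an illustration of the technique, not Ray's actual code: the function names and trial dictionaries are invented for this example. Note that the cloning decision depends only on trial scores; no logging state is consulted anywhere.

```python
import random

def exploit_and_explore(trials, quantile=0.25, mutation_factor=1.2):
    """Toy PBT exploit/explore step (illustrative, not Ray's implementation).

    Trials in the bottom quantile clone hyperparameters from a trial in the
    top quantile (exploit), then perturb them (explore). Nothing here reads
    any logging configuration: cloning is driven purely by scores.
    """
    ranked = sorted(trials, key=lambda t: t["score"])
    n = max(1, int(len(ranked) * quantile))
    bottom, top = ranked[:n], ranked[-n:]
    for trial in bottom:
        donor = random.choice(top)
        # Exploit: copy the donor's hyperparameters.
        trial["hparams"] = dict(donor["hparams"])
        # Explore: perturb each hyperparameter up or down.
        for key in trial["hparams"]:
            factor = random.choice([mutation_factor, 1 / mutation_factor])
            trial["hparams"][key] *= factor
    return trials
```

Because the logic above never touches a logger, changing the logging level cannot (by design) change which trials get cloned; the same separation of concerns holds in Ray's scheduler.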
If PBT stops cloning when logging is set to 'error', that is most likely a bug or an unintended side effect elsewhere in your setup, not an intended feature. Nothing in the official documentation or code indicates that the logging level should affect PBT's core functionality. Would you like a step-by-step breakdown of the relevant code paths, or suggestions for debugging this?
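As a first debugging step, one way to isolate the logging level as a variable is to run the same seeded experiment twice under different logging levels and compare the outcomes. The sketch below uses a hypothetical `run_pbt_step` placeholder; in practice you would substitute your actual seeded PBT run.

```python
import logging
import random

def run_pbt_step(seed):
    """Hypothetical stand-in for one seeded PBT perturbation step.
    Replace with your actual (seeded) tuning run when debugging."""
    random.seed(seed)
    return [round(random.uniform(0.001, 0.1), 5) for _ in range(4)]

def results_under(level, seed=0):
    """Run the step with the root logger set to the given level."""
    logging.getLogger().setLevel(level)
    return run_pbt_step(seed)

# If logging is truly inert, seeded runs must match across levels.
assert results_under(logging.ERROR) == results_under(logging.DEBUG)
```

If the real runs diverge under this comparison, that points to a genuine (and reportable) interaction between logging configuration and scheduler behavior rather than ordinary run-to-run randomness.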