How to use ray.tune.function_runner.StatusReporter with tune.with_parameters?

I am using the Function API with ray.tune. My training function looks roughly like this:

def train_func(config, reporter, arch_cfg, checkpoint_dir=None):
    # ... training code ...

where arch_cfg is an extra parameter I defined and passed in. I tried to use:

tune.run(
    tune.with_parameters(train_func, arch_cfg=arch_cfg),
    ...
)

However, I get an error message stating:

TypeError: train_func() missing 1 required positional argument: 'reporter'
2022-06-06 21:50:44,341 ERROR trial_runner.py:876 -- Trial train_func_1338d_00000: Error processing event.

specifically from:
(train_func pid=10304) Traceback (most recent call last):
(train_func pid=10304)   File "C:\Users\Steven\anaconda3\envs\GRF_hip_outcomes\lib\threading.py", line 932, in _bootstrap_inner
(train_func pid=10304)     self.run()
(train_func pid=10304)   File "C:\Users\Steven\anaconda3\envs\GRF_hip_outcomes\lib\site-packages\ray\tune\function_runner.py", line 298, in run
(train_func pid=10304)     raise e
(train_func pid=10304)   File "C:\Users\Steven\anaconda3\envs\GRF_hip_outcomes\lib\site-packages\ray\tune\function_runner.py", line 272, in run
(train_func pid=10304)     self._entrypoint()
(train_func pid=10304)   File "C:\Users\Steven\anaconda3\envs\GRF_hip_outcomes\lib\site-packages\ray\tune\function_runner.py", line 348, in entrypoint
(train_func pid=10304)     return self._trainable_func(
(train_func pid=10304)   File "C:\Users\Steven\anaconda3\envs\GRF_hip_outcomes\lib\site-packages\ray\util\tracing\tracing_helper.py", line 462, in _resume_span
(train_func pid=10304)     return method(self, *_args, **_kwargs)
(train_func pid=10304)   File "C:\Users\Steven\anaconda3\envs\GRF_hip_outcomes\lib\site-packages\ray\tune\function_runner.py", line 640, in _trainable_func
(train_func pid=10304)     output = fn()
(train_func pid=10304)   File "C:\Users\Steven\anaconda3\envs\GRF_hip_outcomes\lib\site-packages\ray\tune\utils\trainable.py", line 381, in inner
(train_func pid=10304)     trainable(config, **fn_kwargs)

Hence I was wondering: what is the proper way to use the StatusReporter?

Hey @stephano41, what version of Ray are you using?

I would recommend not using reporter the way you currently do. Instead, you can remove the reporter arg from train_func and call tune.report(...) directly inside your train_func to report metrics to Tune. This should work correctly with tune.with_parameters(...).

For example, like here: Training (tune.Trainable, tune.report) — Ray 1.12.1
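In other words, something roughly like this (the training loop, arch_cfg, and search space below are placeholders, not your actual code):

from ray import tune


def train_func(config, arch_cfg, checkpoint_dir=None):
    # Placeholder training loop: report metrics with tune.report
    # instead of taking a reporter argument.
    for step in range(5):
        loss = 1.0 / (step + 1)
        tune.report(loss=loss, step=step)


arch_cfg = {"num_layers": 2}  # placeholder for your architecture config

tune.run(
    tune.with_parameters(train_func, arch_cfg=arch_cfg),
    config={"lr": tune.grid_search([0.01, 0.1])},  # placeholder search space
)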


I am using Ray 1.12.1.
I am already using tune.report for metrics; however, I was wondering how I would report other information about the trial, such as the number of parameters the model has or the architecture of the model.

The context is that I want this information not only to show up on the command line but also to be saved to a file, which I have managed to do for information in the main program using the logging module. However, when I try to retrieve that same logger (configured in the main program with log-to-file and log-to-console handlers) inside train_func, it no longer works. Specifically, the logging level automatically becomes WARNING, so no debug or info messages go through (I can't seem to change the logging level inside train_func either), and the log-to-file handler stops working. This is further complicated by the fact that I use Hydra to set up the logger, and so far I haven't found any documentation on how to manually re-create a logger from the same configs.
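For reference, what I am trying inside train_func is roughly this (the logger name is illustrative); inside the trial the logger's effective level ends up as WARNING and the file handler is gone:

import logging


def train_func(config, arch_cfg, checkpoint_dir=None):
    # Retrieve the logger that Hydra configured in the main process.
    logger = logging.getLogger("my_project")  # illustrative name
    # Inside the Ray worker process this logger's effective level is
    # WARNING and the file handler is missing, so neither of these
    # shows up anywhere:
    logger.info("model architecture: %s", arch_cfg)
    logger.debug("number of parameters: %d", 123456)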

I managed to hijack the CLIReporter class to log information to the console and save it to a file using that specific logger:

from ray.tune.progress_reporter import CLIReporter


class Reporter(CLIReporter):
    """Wrapper around CLIReporter that routes progress output through the
    Python logging module (console + file handlers)."""

    def __init__(self, logger, *args, **kwargs):
        self.logger = logger
        super(Reporter, self).__init__(*args, **kwargs)

    def report(self, trials, done: bool, *sys_info):
        # Log the progress string instead of printing it, so it also
        # ends up in the file handler attached to the logger.
        self.logger.info(self._progress_str(trials, done, *sys_info))
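I then pass it to tune.run roughly like this (the logger name here is illustrative):

import logging

from ray import tune

# The Hydra-configured logger in the main process (illustrative name).
logger = logging.getLogger("my_project")

tune.run(
    tune.with_parameters(train_func, arch_cfg=arch_cfg),
    progress_reporter=Reporter(logger),
)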

Hence I was wondering whether I could do something similar with the StatusReporter, since I couldn't get the logger to work inside train_func.

Ah thanks for the detailed explanation @stephano41!

So you can still use your custom reporter and pass it in as the progress_reporter in tune.run. But inside your Tune function you would still call tune.report(...) instead of using the status reporter, and this will go through the custom reporter that you passed in.

So, like this for example:

from ray import tune
from ray.tune.progress_reporter import CLIReporter


def train_func(config, my_arg):
    # Report metrics with tune.report; no reporter argument needed.
    for _ in range(3):
        tune.report(x=my_arg)


class MyProgressReporter(CLIReporter):
    def report(self, *args, **kwargs):
        # Hook into every progress report; log/print whatever you need here.
        print("hello")
        super().report(*args, **kwargs)


tune.run(
    tune.with_parameters(train_func, my_arg=1),
    progress_reporter=MyProgressReporter(),
)

Btw, if all you need is to have the intermediate report results saved to files, this is automatically done by Tune. Inside each trial's result directory there is a progress.csv file containing the progress for that trial.
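For example, you can also pull those per-trial results back as pandas dataframes from the analysis object that tune.run returns (a sketch; the column names depend on what you report):

analysis = tune.run(
    tune.with_parameters(train_func, my_arg=1),
    progress_reporter=MyProgressReporter(),
)

# One DataFrame per trial, built from that trial's progress.csv,
# keyed by the trial's logdir.
for logdir, df in analysis.trial_dataframes.items():
    print(logdir)
    print(df[["x", "training_iteration"]])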
