Thank you @kai and @Lars_Simon_Zehnder for clarifying the terms and for the insightful messages! I tested all the suggested settings, and these are the results I got:
Running the same experiment in each case:
- `ray.init(num_cpus=5)`
- `num_samples=1000`
- `trainable_func=problem_rosenbrock()`
Test 1 (fastest)
Setting `session.report({"mean_loss": z, "done": True})` within the trainable:
- `os.environ["TUNE_DISABLE_AUTO_CALLBACK_LOGGERS"] = "1"`
- `reuse_actors=True`
- `log_to_file=False`
- Time: 22 seconds
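For context, here is a minimal sketch of roughly how the Test 1 configuration is wired together. The Rosenbrock objective, the search space, and the exact placement of `reuse_actors` / `log_to_file` in `TuneConfig` / `RunConfig` are illustrative assumptions and may differ slightly from my actual script and across Ray versions:

```python
import os

# Set before ray.init() so the Tune workers inherit it.
os.environ["TUNE_DISABLE_AUTO_CALLBACK_LOGGERS"] = "1"

import ray
from ray import tune
from ray.air import RunConfig, session


def problem_rosenbrock(config):
    # Stand-in Rosenbrock objective (assumed shape of my real trainable).
    x, y = config["x"], config["y"]
    z = (1.0 - x) ** 2 + 100.0 * (y - x ** 2) ** 2
    # Test 1: report done=True so the trial finishes after a single result.
    session.report({"mean_loss": z, "done": True})


ray.init(num_cpus=5)

tuner = tune.Tuner(
    problem_rosenbrock,
    param_space={"x": tune.uniform(-2.0, 2.0), "y": tune.uniform(-2.0, 2.0)},
    tune_config=tune.TuneConfig(num_samples=1000, reuse_actors=True),
    run_config=RunConfig(log_to_file=False),
)
results = tuner.fit()
```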
Test 2
Re-using actors and setting the env var:
- `os.environ["TUNE_DISABLE_AUTO_CALLBACK_LOGGERS"] = "1"`
- `reuse_actors=True`
- `log_to_file=False`
- Time: 28 seconds
Test 3
Logging stdout and stderr to file:
- `os.environ["TUNE_DISABLE_AUTO_CALLBACK_LOGGERS"] = "1"`
- `reuse_actors=True`
- `log_to_file=True`
- Time: 32 seconds
Test 4
Re-using actors but not setting the env var:
- `reuse_actors=True`
- `log_to_file=False`
- Time: 42 seconds
Test 5 (slowest)
Setting the env var but not re-using actors:
- `os.environ["TUNE_DISABLE_AUTO_CALLBACK_LOGGERS"] = "1"`
- `reuse_actors=False`
- `log_to_file=False`
- Time: ~10 minutes
I realize I will not get the same speed as running native Optuna, but I'm happy to see there are settings I can tweak to improve performance.
Questions
- @kai, is there a way to disable all logging completely? Similar to what this user asked.
- Is there a way to customize the trial name and dirname? I see there is a `trial_dirname_creator` parameter that seems to work within the `tune.run()` call, but I have not seen an example with the `tune.Tuner` API. Are there any examples of how to pass `trial_name_creator` and `trial_dirname_creator` with the current `Tuner` API? (A sketch of the `tune.run()` usage I mean is below this list.)
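To illustrate the second question, this is roughly the legacy `tune.run()` usage I am referring to. The naming helpers (`my_trial_name` / `my_trial_dirname`) are hypothetical and only show the pattern I have seen with the old API, not the `Tuner` equivalent I am asking about:

```python
from ray import tune
from ray.air import session


def problem_rosenbrock(config):
    # Same stand-in objective as in the sketch above.
    x, y = config["x"], config["y"]
    session.report({"mean_loss": (1.0 - x) ** 2 + 100.0 * (y - x ** 2) ** 2})


def my_trial_name(trial):
    # Hypothetical helper: Trial -> short, readable trial name.
    return f"rosenbrock_{trial.trial_id}"


def my_trial_dirname(trial):
    # Hypothetical helper: keeps the trial directory name short on disk.
    return f"rosenbrock_{trial.trial_id}"


analysis = tune.run(
    problem_rosenbrock,
    config={"x": tune.uniform(-2.0, 2.0), "y": tune.uniform(-2.0, 2.0)},
    num_samples=1000,
    trial_name_creator=my_trial_name,
    trial_dirname_creator=my_trial_dirname,
)
```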
Again, thanks for all the responses! This has been very helpful.