I’m interested in a recipe to execute tune.run
in a deferred manner, i.e. so that it doesn’t block, and ideally so that intermediate results can be accessed while it’s running. Is this possible?
I tried doing this by wrapping the call in a function decorated with @ray.remote
, but that seems to confuse some part of the scheduling pipeline: one process hogs most of the memory, and the number of trials running in parallel drops to 2, compared to the 12 I get when I call tune.run directly. Is this the right way to go about things?
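For reference, this is roughly the wrapper I used (the trainable, search space, and `num_samples=12` are just illustrative placeholders standing in for my actual setup):

```python
import ray
from ray import tune

def trainable(config):
    # placeholder trainable for illustration
    for step in range(10):
        tune.report(score=config["x"] * step)

@ray.remote
def run_tune():
    # wrap the blocking tune.run call so the caller gets a future instead
    return tune.run(
        trainable,
        config={"x": tune.uniform(0, 1)},
        num_samples=12,
    )

future = run_tune.remote()   # returns immediately; tuning runs in the background
# ... do other work in the notebook ...
analysis = ray.get(future)   # blocks until the whole run finishes
```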
The source code also suggests that TrialRunner is instantiated as a local variable inside run(), and it seems to be the only way to access existing trials. Apart from looking at (tensorboard) logs, is there a way to access the results directly, e.g. in a notebook, while Ray Tune is running?
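To make that second part concrete, this is the kind of access I’m hoping for from a notebook. I’m not sure ExperimentAnalysis is meant to be pointed at a still-running experiment, and the path below is a placeholder:

```python
from ray.tune import ExperimentAnalysis

# placeholder path to the experiment's log directory
analysis = ExperimentAnalysis("~/ray_results/my_experiment")

# pull the latest reported metrics for every trial into a DataFrame
df = analysis.dataframe()
print(df.head())
```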