Can tune.run be run without blocking?

I’m interested in a recipe to execute tune.run in a deferred manner, i.e. so that it doesn’t block, and ideally so that intermediate results can be accessed while it’s running. Is this possible?

I tried doing this by wrapping the call in a function decorated with @ray.remote, but that seems to confuse some part of the parallelism pipeline - one process hogs most of the memory, and the number of parallel executors drops to 2, from the 12 I get when I run it directly. Is this the right way to go about things?

The source code also suggests that TrialRunner is a local variable instantiated inside run(), and it seems to be the only way to access existing trials. Apart from looking at (TensorBoard) logs, is there a way to access the results directly, e.g. in a notebook, while Ray Tune is running?

Hi @daikts, there is no general support for running asynchronously, but theoretically you should be able to make it happen.

I’m not sure why your processes are clogged, theoretically it should work out of the box. Can you share this part of your code?

Instead of using ray.remote to run Tune, you could try running it in a thread. For accessing intermediate results, have a look at our Callback interface. You’ll probably have to implement some communication structure, e.g. a queue, to push the callback results back to an event handler on your side.