How to specify max_calls for the functional API

I tried this:

@ray.remote(max_calls=1)
def train():
    do_something()

def train_func(config):
    train.remote()

tune.run(train_func, config={...})

but it always crashes with “SystemExit was raised from the worker”.
Can someone shed light on what the reason could be? Thanks!

You shouldn’t set this for tune.run; Ray Tune doesn’t expect users to use the .remote interface.

Then how can I deal with the problem that max_calls=1 is designed to address: GPU memory not being released by the previous worker? In my current Tune experiment, where each trial uses one GPU, half of the trials always end up with a CUDA out-of-memory error. I suspect those trials were started too soon, before the memory claimed by the previous worker had been released.
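One common workaround for this is to poll free GPU memory at the start of a trial and only proceed once enough has been released (recent Ray versions ship a helper along these lines, tune.utils.wait_for_gpu, though availability depends on your version). Below is a minimal sketch of the polling idea in plain Python; the memory probe is injectable so the sketch runs without a GPU, and the function name and parameters are illustrative, not a Ray API. On real hardware the probe would query NVML (e.g. via pynvml) for free memory on the trial's assigned GPU.

```python
import time


def wait_for_free_gpu_memory(probe, required_mb, timeout_s=60.0, poll_s=0.5):
    """Block until `probe()` reports at least `required_mb` MB of free GPU
    memory, then return the observed value; raise TimeoutError otherwise.

    `probe` is any zero-argument callable returning free memory in MB
    (on real hardware: an NVML query for the GPU assigned to this trial).
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        free_mb = probe()
        if free_mb >= required_mb:
            return free_mb
        time.sleep(poll_s)
    raise TimeoutError(f"GPU memory did not free up within {timeout_s}s")


# Simulated probe: the previous worker gradually releases its memory.
readings = iter([1000, 4000, 9000])
free = wait_for_free_gpu_memory(lambda: next(readings),
                                required_mb=8000, poll_s=0.01)
```

A trial function would call this once at the top, before building the model, so it blocks until the previous worker's allocation has actually been freed instead of failing immediately with CUDA OOM.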

Maybe try the class-based Trainable API? See the “Training (tune.Trainable, tune.report)” page in the Ray v2.0.0.dev0 docs.