How to specify max_calls for functional API

I tried decorating the training function with `ray.remote` and passing it to `tune.run`:

```python
@ray.remote(max_calls=1)
def train_func(config):
    ...

tune.run(train_func, config={...})
```

but it always crashes with "SystemExit was raised from the worker".
Can someone shed light on what the reason could be? Thanks!

You shouldn't set this at all; Ray Tune doesn't expect users to use the `.remote` interface directly.

Then how can I deal with the problem of "GPU memory not released by previous worker" that `max_calls=1` is designed to address? In my current Tune experiment, where each trial uses 1 GPU, half of the trials always end with a CUDA Out of Memory error. I suspect this is because those trials were started too soon, while the memory claimed by the previous worker had not yet been released.

Maybe try using the Trainable class API instead? See "Training (tune.Trainable, tune.report) — Ray v2.0.0.dev0" in the docs.