Ray Tune run out of disk (/tmp/ray)

Hi,

I am using Ray to run a grid search over 480 trials. The run ran out of disk because 293G of logs were generated in /tmp/ray in a single session. Is there any suggestion for reducing the log size? Thanks!!

293G    ./session_2021-06-18_03-17-13_932164_5376

Here’s how I run tune.run:

reporter = tune.CLIReporter(metric_columns=all_monitor_metrics)
analysis = tune.run(
        tune.with_parameters(Trainable, data=data),
        search_alg=init_search_algorithm(search_alg, metric=model_config.val_metric, mode=args.mode),
        local_dir=args.local_dir,
        metric=val_metric,
        mode=args.mode,
        num_samples=args.num_samples,
        resources_per_trial={'cpu': args.cpu_count, 'gpu': args.gpu_count},
        progress_reporter=reporter,
        config=model_config)
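
In case it is useful context, here is a minimal sketch of two knobs that can keep the session directory smaller, assuming the Ray 1.x API; the trainable, search space, and paths below are placeholders for illustration and are not from the run above:

import ray
from ray import tune

# Put Ray's session/log directory on a volume with more space than /tmp
# (the path here is just an example).
ray.init(_temp_dir="/data/ray_tmp")

# Dummy trainable used only to make the sketch runnable.
def objective(config):
    tune.report(score=config["x"] ** 2)

analysis = tune.run(
    objective,
    config={"x": tune.grid_search(list(range(10)))},
    local_dir="/data/ray_results",  # where Tune writes trial results
    verbose=1,                      # 0-3; lower values produce less console/log output
)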

Hey @Eleven1Liu what version of Ray are you on?

Can you try the latest (1.4.1)?
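
For reference, upgrading should just be a pip install (the [tune] extra is only needed if Tune was installed that way):

pip install -U "ray[tune]==1.4.1"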


@rliaw I use 1.4.0.
Sure. Thanks!!

@rliaw 1.4.1 works! Thanks a lot 😄