Alternatives to the /tmp directory

Hello,
I’m trying to tune parameters on a shared cluster. Is there a way to change Ray’s temp directory path to somewhere else, since I don’t have much space left in /tmp? Here is the error:

srun: error: c0073: task 0: Exited with exit code 1
/home/tmamidi/.conda/envs/training/lib/python3.8/site-packages/ray/autoscaler/_private/cli_logger.py:57: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
  warnings.warn(
Traceback (most recent call last):
  File "Tuning/SGD.py", line 132, in <module>
    ray.init(ignore_reinit_error=True)
  File "/home/tmamidi/.conda/envs/training/lib/python3.8/site-packages/ray/_private/client_mode_hook.py", line 47, in wrapper
    return func(*args, **kwargs)
  File "/home/tmamidi/.conda/envs/training/lib/python3.8/site-packages/ray/worker.py", line 699, in init
    _global_node = ray.node.Node(
  File "/home/tmamidi/.conda/envs/training/lib/python3.8/site-packages/ray/node.py", line 167, in __init__
    self._init_temp(redis_client)
  File "/home/tmamidi/.conda/envs/training/lib/python3.8/site-packages/ray/node.py", line 270, in _init_temp
    try_to_create_directory(self._temp_dir)
  File "/home/tmamidi/.conda/envs/training/lib/python3.8/site-packages/ray/_private/utils.py", line 796, in try_to_create_directory
    os.makedirs(directory_path, exist_ok=True)
  File "/home/tmamidi/.conda/envs/training/lib/python3.8/os.py", line 223, in makedirs
    mkdir(name, mode)
OSError: [Errno 28] No space left on device: '/tmp/ray'
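For context, this is roughly the workaround I’m considering: pointing Ray’s session files at a scratch directory instead of /tmp. This is a sketch under assumptions, not something I’ve verified: the `SCRATCH` environment variable and its fallback are guesses about the cluster layout, and my understanding is that in Ray 1.x the override is the private `_temp_dir` keyword of `ray.init` (shown only in a comment here).

```python
import os

# Assumption: the cluster exposes a per-user scratch area via $SCRATCH;
# fall back to a ray_tmp directory under the job's working directory.
scratch = os.environ.get("SCRATCH", os.path.join(os.getcwd(), "ray_tmp"))
ray_tmp = os.path.join(scratch, "ray")
os.makedirs(ray_tmp, exist_ok=True)

# Hedged sketch: instead of the default /tmp/ray, Ray 1.x can apparently
# be pointed elsewhere via the (private) `_temp_dir` keyword:
#
#   import ray
#   ray.init(ignore_reinit_error=True, _temp_dir=ray_tmp)
print(ray_tmp)
```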

Here is the snippet that I’m using for tuning:

from sklearn.model_selection import StratifiedKFold
from tune_sklearn import TuneSearchCV

clf = TuneSearchCV(
    model,
    param_distributions=config,
    n_trials=500,
    early_stopping=False,
    max_iters=1,
    search_optimization="bayesian",
    n_jobs=50,
    refit=True,
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=42),
    verbose=0,
    # loggers="tensorboard",
    random_state=42,
    local_dir="./ray_results",
)
clf.fit(X_train, Y_train)

I found this page on configuration and logging: Configuring Ray — Ray v2.0.0.dev0

This option might work in my case, because I’m starting Ray with head and worker nodes when submitting Slurm jobs.
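Concretely, in the Slurm launch script I’d try something like the sketch below. `ray start` accepts a `--temp-dir` flag in Ray 1.x; the `$SCRATCH` path and the `srun` flags in the echoed line are placeholders for my cluster’s setup, not a tested recipe, so the command is only printed here rather than run.

```shell
# Assumption: $SCRATCH is a per-user scratch filesystem with free space;
# fall back to a local ray_tmp directory for illustration.
SCRATCH="${SCRATCH:-$PWD/ray_tmp}"
mkdir -p "$SCRATCH"

# Hedged sketch of the head-node launch: redirect Ray's session files
# away from /tmp with --temp-dir (actual srun flags depend on the job).
echo "srun --nodes=1 --ntasks=1 ray start --head --temp-dir=$SCRATCH"
```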