Getting serialization error when using Ray Tune

I am new to ray.tune and I am trying to use it to tune two hyperparameters: learning_rate and weight_decay.
I get the following error message; after the error message, I share my code as well.

================================================================================
Checking Serializability of <class 'ray.tune.trainable.function_trainable.wrap_function.<locals>.ImplicitFunc'>
================================================================================
!!! FAIL serialization: cannot pickle 'Event' object
    Serializing '__init__' <function Trainable.__init__ at 0x7fbe1a9ed550>...
    Serializing '__repr__' <function wrap_function.<locals>.ImplicitFunc.__repr__ at 0x7fbdf1d94ee0>...
    Serializing '_close_logfiles' <function Trainable._close_logfiles at 0x7fbe1a9f13a0>...
    Serializing '_create_checkpoint_dir' <function FunctionTrainable._create_checkpoint_dir at 0x7fbe1a9878b0>...
    Serializing '_create_logger' <function Trainable._create_logger at 0x7fbe1a9f1280>...
    Serializing '_export_model' <function Trainable._export_model at 0x7fbe1a9f1c10>...
    Serializing '_implements_method' <function Trainable._implements_method at 0x7fbe1a9f1ca0>...
    Serializing '_maybe_load_from_cloud' <function Trainable._maybe_load_from_cloud at 0x7fbe1a9edd30>...
    Serializing '_maybe_save_to_cloud' <function Trainable._maybe_save_to_cloud at 0x7fbe1a9edca0>...
    Serializing '_open_logfiles' <function Trainable._open_logfiles at 0x7fbe1a9f1310>...
    Serializing '_report_thread_runner_error' <function FunctionTrainable._report_thread_runner_error at 0x7fbe1a987ca0>...
    Serializing '_restore_from_checkpoint_obj' <function FunctionTrainable._restore_from_checkpoint_obj at 0x7fbe1a987a60>...
    Serializing '_start' <function FunctionTrainable._start at 0x7fbe1a9875e0>...
    Serializing '_storage_path' <function Trainable._storage_path at 0x7fbe1a9ed670>...
    Serializing '_trainable_func' <function wrap_function.<locals>.ImplicitFunc._trainable_func at 0x7fbdf1da31f0>...
    !!! FAIL serialization: cannot pickle 'Event' object
    Detected 3 global variables. Checking serializability...
        Serializing 'partial' <class 'functools.partial'>...
        Serializing 'inspect' <module 'inspect' from '/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/inspect.py'>...
        Serializing 'RESULT_DUPLICATE' __duplicate__...
    Detected 3 nonlocal variables. Checking serializability...
        Serializing 'train_func' <function with_parameters.<locals>._inner at 0x7fbe100bd3a0>...
        !!! FAIL serialization: cannot pickle 'Event' object
        Detected 1 nonlocal variables. Checking serializability...
            Serializing 'inner' <function with_parameters.<locals>.inner at 0x7fbe100bd280>...
            !!! FAIL serialization: cannot pickle 'Event' object
    Serializing '_annotated' FunctionTrainable...
================================================================================
Variable: 

        FailTuple(inner [obj=<function with_parameters.<locals>.inner at 0x7fbe100bd280>, parent=<function with_parameters.<locals>._inner at 0x7fbe100bd3a0>])

was found to be non-serializable. There may be multiple other undetected variables that were non-serializable. 
Consider either removing the instantiation/imports of these variables or moving the instantiation into the scope of the function/class. 
If you have any suggestions on how to improve this error message, please reach out to the Ray developers on github.com/ray-project/ray/issues/
================================================================================
Traceback (most recent call last):
  File "/visinf/home/shamidi/Projects/rainbow-memory-Bayesian/main.py", line 347, in <module>
    main()
  File "/visinf/home/shamidi/Projects/rainbow-memory-Bayesian/main.py", line 223, in main
    result = tune.run(
  File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/ray/tune/tune.py", line 520, in run
    experiments[i] = Experiment(
  File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/ray/tune/experiment/experiment.py", line 163, in __init__
    self._run_identifier = Experiment.register_if_needed(run)
  File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/ray/tune/experiment/experiment.py", line 365, in register_if_needed
    raise type(e)(str(e) + " " + extra_msg) from None
TypeError: cannot pickle 'Event' object Other options: 
-Try reproducing the issue by calling `pickle.dumps(trainable)`. 
-If the error is typing-related, try removing the type annotations and try again.
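
For reference, the check the message suggests can be run outside of tune.run like this (a minimal sketch; `method` and `find_hyperparametrs` come from my code below, and Ray's cloudpickle is used since that is what Tune uses internally):

    import ray.cloudpickle as cloudpickle
    from ray import tune
    from ray.util import inspect_serializability

    # reproduce the failure outside of tune.run, as the error message suggests
    trainable = tune.with_parameters(method.find_hyperparametrs)
    inspect_serializability(trainable, name="trainable")  # walks the object and reports which member fails to pickle
    cloudpickle.dumps(trainable)  # should reproduce the "cannot pickle 'Event' object" TypeError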

My code follows the steps below:

    for cur_iter in range(args.n_tasks):

        if args.mode == "joint" and cur_iter > 0:
            return

        print("\n" + "#" * 50)
        print(f"# Task {cur_iter} iteration")
        print("#" * 50 + "\n")
        logger.info("[2-1] Prepare a datalist for the current task")

        task_acc = 0.0
        eval_dict = dict()

        # get datalist
        cur_train_datalist = get_train_datalist(args, cur_iter)
        cur_valid_datalist = get_valid_datalist(args, args.exp_name, cur_iter)
        cur_test_datalist = get_test_datalist(args, args.exp_name, cur_iter)

        logger.info("[2-2] Set environment for the current task")

        method.set_current_dataset(cur_train_datalist, cur_test_datalist, cur_valid_datalist)
    
        method.before_task(cur_train_datalist, cur_iter, args.init_model, args.init_opt, 
                           args.bayesian_model)

        # The way to handle streamed samples
        logger.info(f"[2-3] Start to train under {args.stream_env}")
        
        if args.stream_env == "offline" or args.mode == "joint" or args.mode == "gdumb":
            # Offline Train
       
            # -----------------------------------------------------------------------------------------------------------------
            # Ray Tune for the first task of the blurry case
            # -----------------------------------------------------------------------------------------------------------------
            if args.exp_name == "blurry10" and cur_iter==0:
                # configs has already been defined.
                
                configs = {"lr": tune.loguniform(1e-4, 1e-1), "weight_decay": tune.uniform(1e-8, 1e-1)}
                hyperopt_search = HyperOptSearch(metric='accuracy', mode='max')
                #hyperopt_search = BayesOptSearch(metric='loss', mode='min', points_to_evaluate=[{"lamda": 1}, {"lamda": 25}])
                scheduler = ASHAScheduler(
                    metric="accuracy",
                    mode="max",
                    max_t=100,
                    grace_period=5,
                    reduction_factor=2)
                
                reporter = CLIReporter(
                    parameter_columns=["lr", "weight_decay"],
                    metric_columns=["loss", "accuracy", "training_iteration"]
                    )
               
                
               
                result = tune.run(
                                tune.with_parameters(method.find_hyperparametrs),
                                resources_per_trial={"cpu": 1, "gpu": 1},
                                config=configs,
                                num_samples=1,
                                search_alg=hyperopt_search,
                                scheduler=scheduler,
                                #keep_checkpoints_num=2,
                                checkpoint_score_attr="accuracy", 
                                progress_reporter=reporter
                                )

and the set_current_dataset() is:

    def set_current_dataset(self, train_datalist, test_datalist, valid_datalist):
        
        random.shuffle(train_datalist)
        self.prev_streamed_list = self.streamed_list
        self.streamed_list = train_datalist
        self.test_list = test_datalist
        # add validation set
        self.valid_list = valid_datalist

        # For ray tune - test
        self.train_loader, self.test_loader, self.valid_loader = self.get_dataloader(
            self.batch_size, self.n_worker,
            train_list=self.streamed_list,  # already shuffled above; random.shuffle() shuffles in place and returns None
            test_list=self.test_list, valid_list=self.valid_list)

    def get_dataloader(self, batch_size, n_worker, train_list, test_list, valid_list):
        # Loader
        train_loader = None
        test_loader = None
        # add valid_loader 
        valid_loader = None

        if train_list is not None and len(train_list) > 0:
            train_dataset = ImageDataset(
                pd.DataFrame(train_list),
                dataset=self.dataset,
                transform=self.train_transform,
            )
            # drop last because of BatchNorm1D in IcarlNet
            train_loader = DataLoader(
                train_dataset,
                shuffle=True,
                batch_size=batch_size,
                num_workers=n_worker,
                drop_last=True,
                pin_memory=True,
            )

        if test_list is not None:
            test_dataset = ImageDataset(
                pd.DataFrame(test_list),
                dataset=self.dataset,
                transform=self.test_transform,
            )
            test_loader = DataLoader(
                test_dataset, shuffle=False, batch_size=batch_size, num_workers=n_worker, pin_memory=True
            )
       
        if valid_list is not None:
            valid_dataset = ImageDataset(
                pd.DataFrame(valid_list),
                dataset=self.dataset,
                transform=self.test_transform, # use the same transformation for the valid set as the test set
            )
            valid_loader = DataLoader(
                valid_dataset, shuffle=False, batch_size=batch_size, num_workers=n_worker, pin_memory=True
            )

        return train_loader, test_loader, valid_loader

and, finally, the trainable (I am not sure if this is the correct term) is as follows:

class RM(Finetune, tune.Trainable):
    def __init__(
        self, criterion, device, train_transform, test_transform, n_classes, **kwargs
    ):
        super().__init__(
            criterion, device, train_transform, test_transform, n_classes, **kwargs
        )
        
        self.batch_size = kwargs["batchsize"]
        self.n_worker = kwargs["n_worker"]
        self.exp_env = kwargs["stream_env"]
        self.bayesian = kwargs["bayesian_model"]
        self.pretrain = kwargs['pretrain']
        self.scheduler_name = kwargs["sched_name"]
        if kwargs["mem_manage"] == "default":
            self.mem_manage = "uncertainty"

    # --------------------------------------------------------------------------------------------------
    # For Ray Tune
    # --------------------------------------------------------------------------------------------------
    def find_hyperparametrs(self, config):

        #batch_size = self.batch_size
        n_worker = self.n_worker
        cur_iter = 0

        # select_optimizer returns (optimizer, scheduler), so unpack both
        self.optimizer, self.scheduler = select_optimizer(
            self.opt_name, config['lr'], config['weight_decay'], self.model, self.sched_name
        )

        # TRAIN
        eval_dict = dict()
        self.model = self.model.to(self.device)
        
        for epoch in range(self.n_epochs):
           
            # initialize for each task
            # optimizer.param_groups is a python list, which contains a dictionary.
            if self.scheduler_name == "cos":
                if epoch <= 0:  # Warm start of 1 epoch
                    for param_group in self.optimizer.param_groups:
                        # param_group is the dict inside the list and is the only item in this list.
                        if self.bayesian is True:
                            param_group["lr"] = self.lr *0.1  # self.lr * 0.1   this was changed due to inf error
                        else:
                            param_group["lr"] = self.lr * 0.1
                elif epoch == 1:  # Then set to maxlr
                    for param_group in self.optimizer.param_groups:
                        param_group["lr"] = self.lr
                else:  # Aand go!
                    if self.scheduler is not None:
                        self.scheduler.step()
            else:
                if self.scheduler is not None:
                    self.scheduler.step()

            # Training
            train_loss, train_acc = self._train(train_loader=self.train_loader, memory_loader=None,
                                                optimizer=self.optimizer, criterion=self.criterion)
            
            # Validation (validating over all the test sets seen so far)
            eval_dict_valid = self.evaluation(
                self.valid_loader, criterion=self.criterion
            )

            # Communicate with Ray tune
            with tune.checkpoint_dir(epoch) as checkpoint_dir: # what should checkpoint_dir be?
                path = os.path.join(checkpoint_dir, "ray_checkpoints", "checkpoint")
                torch.save((self.model.state_dict(), self.optimizer.state_dict()), path)

            tune.report(
                loss=eval_dict_valid["avg_loss"], accuracy=eval_dict_valid["avg_acc"]
                )
            

            # Testing(testing over all the test sets seen so far)
            eval_dict = self.evaluation(
                test_loader=self.test_loader, criterion=self.criterion
            )
            
            # Report the results on the current epoch
            logger.info(
                f"Task {cur_iter} | Epoch {epoch+1}/{self.n_epochs} | train_loss {train_loss:.4f} | train_acc {train_acc:.4f} | "
                f"test_loss {eval_dict['avg_loss']:.4f} | test_acc {eval_dict['avg_acc']:.4f} | "
                f"valid_loss {eval_dict_valid['avg_loss']:.4f} | valid_acc {eval_dict_valid['avg_acc']:.4f} | "
                f"lr {self.optimizer.param_groups[0]['lr']:.4f}"
            )

    def update_model(self, x, y, criterion, optimizer):
        # check the label type and the output of the bayesian model
        
        optimizer.zero_grad()
        do_cutmix = self.cutmix and np.random.rand(1) < 0.5
        if do_cutmix:
            x, labels_a, labels_b, lam = cutmix_data(x=x, y=y, alpha=1.0)
            '''
            x = x.double()
            labels_a = labels_a.double()
            labels_b = labels_b.double()
            '''
            # take care of the output of the bayesian model and its probabilistic loss
            if self.bayesian:
                #self.model.double()
                logit_dict = self.model(x)

                loss = lam * criterion(logit_dict, labels_a)['total_loss'] + (1 - lam) * criterion(
                    logit_dict, labels_b)['total_loss']
                #loss = losses_dict['total_loss']
                logit = criterion(logit_dict, labels_a)['prediction']
                logit = logit.mean(dim=2)
            else:
                #self.model.double()
                logit = self.model(x)
                loss = lam * criterion(logit, labels_a) + (1 - lam) * criterion(
                    logit, labels_b
                )
        else:
            
            if self.bayesian:
                # measure forward pass time
                #t_start = time.time()
                #self.model.double()
                logit_dict = self.model(x)
                #t_end = time.time() - t_start
                # logger.info(f'forward pass time: {t_end:.2f} s')

                # criterion is the probabilistic loss class
                #t_s = time.time()
                losses_dict = criterion(logit_dict, y)
                #t_e = time.time() - t_s
                #logger.info(f'loss time: {t_e:.2f} s')
                
                loss = losses_dict['total_loss']
                logit = losses_dict['prediction'] # Shape: torch.Size([10, 10, 64]) --> (batch_size, num_classes, samples)
                # change the shape of the logit to be (batch_size, num_classes)
                logit = logit.mean(dim=2)
            else:
                #self.model.double()
                logit = self.model(x)
                loss = criterion(logit, y)
        
        # calculate the number of correct predictions per batch for the bayesian model as well here
        _, preds = logit.topk(self.topk, 1, True, True)

        loss.backward()
        ''' ToDo: is it necessary to clip the gradient? it was done in mnvi code
        Maybe they didn't need it but I'm not sure. For the Bayesian case, it is probably needed.
        '''
        if self.bayesian:
            torch.nn.utils.clip_grad_norm_(self.model.parameters(), 0.1, norm_type='inf')
        
        optimizer.step()
        return loss.item(), torch.sum(preds == y.unsqueeze(1)).item(), y.size(0)

    def _train(
        self, train_loader, memory_loader, optimizer, criterion
    ):
        
        total_loss, correct, num_data = 0.0, 0.0, 0.0

        self.model.train()
        if memory_loader is not None and train_loader is not None:
            data_iterator = zip(train_loader, cycle(memory_loader))
        elif memory_loader is not None:
            data_iterator = memory_loader
        elif train_loader is not None:
            data_iterator = train_loader
        else:
            raise NotImplementedError("None of the dataloaders is valid")
        
        for i, data in enumerate(data_iterator):
            if len(data) == 2:
                stream_data, mem_data = data
                x = torch.cat([stream_data["image"], mem_data["image"]])
                y = torch.cat([stream_data["label"], mem_data["label"]])
            else:
                x = data["image"]
                y = data["label"]
            # set to double
            #x = x.double().to(self.device)
            #y = y.double().to(self.device)

            x = x.to(self.device)
            y = y.to(self.device)

            '''
            all_model, _ = self.measure_time(self.model, x)
            print('all_model', all_model)
            '''
            # measure each operation time of the forward pass for one batch
            # ---------------------------------------------------
           
            # ------------------------------------------------------
            # this is equivalent to the step code in the test repo
            l, c, d = self.update_model(x, y, criterion, optimizer)
            # Compute the moving averages - equivalent to MovingAverage in the test repo
            total_loss += l
            correct += c
            num_data += d

        if train_loader is not None:
            n_batches = len(train_loader)
        else:
            n_batches = len(memory_loader)

        return total_loss / n_batches, correct / num_data

    def allocate_batch_size(self, n_old_class, n_new_class):
        new_batch_size = int(
            self.batch_size * n_new_class / (n_old_class + n_new_class)
        )
        old_batch_size = self.batch_size - new_batch_size
        return new_batch_size, old_batch_size
            

I am not sure which object here is causing the problem and any guidance is very much appreciated.
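
A rough check to narrow down which attribute of the `method` object fails to pickle could look like this (a minimal sketch; plain pickle is stricter than the cloudpickle Ray uses, so it may over-report):

    import pickle

    # try to pickle each attribute of the method (RM) instance individually
    for name, value in vars(method).items():
        try:
            pickle.dumps(value)
        except Exception as exc:
            print(f"not picklable: {name} ({type(value).__name__}) -> {exc}")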

Can you post a runnable example?
I don’t see the definition of train_func, but the error message is saying that the Event object captured by train_func is not picklable.
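
For illustration, a minimal sketch of that failure mode, assuming the non-picklable Event is a torch.cuda.Event held on the object whose bound method is passed to Tune (pickling a bound method pickles its whole owner object; requires a CUDA-enabled PyTorch build):

    import pickle
    import torch

    class Holder:
        def __init__(self):
            # torch.cuda.Event objects hold GPU-side state and are not picklable
            self.start_event = torch.cuda.Event(enable_timing=True)

        def train_func(self, config):
            pass

    h = Holder()
    # serializing the bound method drags in `h` and its Event,
    # which should reproduce the "cannot pickle 'Event' object" TypeError
    pickle.dumps(h.train_func)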

I am sharing the parts that lead to a reproducible version. It is a big repo, so I will try to keep it short.
This is the link to the repository, and you can add the datasets from there.

This is the main script:

"""
rainbow-memory
Copyright 2021-present NAVER Corp.
GPLv3
"""
import datetime
import logging.config
import os
import random
from collections import defaultdict

import numpy as np
import torch
torch.use_deterministic_algorithms(True, warn_only=True)
from randaugment import RandAugment
from torch import nn
from torch.utils.tensorboard import SummaryWriter
from torchvision import transforms

from configuration import config
from utils.augment import Cutout, select_autoaugment
from utils.data_loader import get_test_datalist, get_statistics
from utils.data_loader import get_train_datalist, get_valid_datalist
from utils.method_manager import select_method

# add ray tune 
import ray
from ray import tune
from ray.tune import CLIReporter
from ray.tune.schedulers import ASHAScheduler
from ray.tune.search.hyperopt import HyperOptSearch
from ray.tune.search.bayesopt import BayesOptSearch

import pdb 

def main():
    args = config.base_parser()
    timestamp = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    # Save file name
    tr_names = ""
    for trans in args.transforms:
        tr_names += "_" + trans
    save_path = f"{args.dataset}/{timestamp}_{args.mode}_{args.mem_manage}_{args.stream_env}_msz{args.memory_size}_rnd{args.rnd_seed}{tr_names}"

    logging.config.fileConfig("./configuration/logging.conf")
    logger = logging.getLogger()

    os.makedirs(f"/logs/{args.dataset}", exist_ok=True)
    fileHandler = logging.FileHandler("/logs/{}.log".format(save_path), mode="w")
    formatter = logging.Formatter(
        "[%(levelname)s] %(filename)s:%(lineno)d > %(message)s"
    )
    fileHandler.setFormatter(formatter)
    logger.addHandler(fileHandler)

    #writer = SummaryWriter("/tensorboard")

    if torch.cuda.is_available():
        device = torch.device("cuda")
    else:
        device = torch.device("cpu")
    logger.info(f"Set the device ({device})")

    # Fix the random seeds
    # https://hoya012.github.io/blog/reproducible_pytorch/
    torch.manual_seed(args.rnd_seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    np.random.seed(args.rnd_seed)
    random.seed(args.rnd_seed)

    # Transform Definition
    mean, std, n_classes, inp_size, _ = get_statistics(dataset=args.dataset)
    train_transform = []
    if "cutout" in args.transforms:
        train_transform.append(Cutout(size=16))
    if "randaug" in args.transforms:
        train_transform.append(RandAugment())
    if "autoaug" in args.transforms:
        train_transform.append(select_autoaugment(args.dataset))

    train_transform = transforms.Compose(
        [
            transforms.Resize((inp_size, inp_size)),
            transforms.RandomCrop(inp_size, padding=4),
            transforms.RandomHorizontalFlip(),
            *train_transform,
            transforms.ToTensor(),
            transforms.Normalize(mean, std),
        ]
    )
    logger.info(f"Using train-transforms {train_transform}")

    test_transform = transforms.Compose(
        [
            transforms.Resize((inp_size, inp_size)),
            transforms.ToTensor(),
            transforms.Normalize(mean, std),
        ]
    )

    logger.info(f"[1] Select a CIL method ({args.mode})")
    criterion = nn.CrossEntropyLoss(reduction="mean")
    method = select_method(
        args, criterion, device, train_transform, test_transform, n_classes
    )

    logger.info(f"[2] Incrementally training {args.n_tasks} tasks")
    task_records = defaultdict(list)

    for cur_iter in range(args.n_tasks):

        # ----------------------------------------
        # TENSORBOARD
        # ---------------------------------------
        f = '/tensorboard/RM_14_*/'+'task_' + str(cur_iter)
        writer = SummaryWriter(f)
        # ----------------------------------------


        if args.mode == "joint" and cur_iter > 0:
            return

        print("\n" + "#" * 50)
        print(f"# Task {cur_iter} iteration")
        print("#" * 50 + "\n")
        logger.info("[2-1] Prepare a datalist for the current task")

        task_acc = 0.0
        eval_dict = dict()

        
        # get datalist
        cur_train_datalist = get_train_datalist(args, cur_iter)
        cur_valid_datalist = get_valid_datalist(args, args.exp_name, cur_iter)
        cur_test_datalist = get_test_datalist(args, args.exp_name, cur_iter)

        # Reduce datalist in Debug mode
        if args.debug:
            random.shuffle(cur_train_datalist)
            random.shuffle(cur_test_datalist)
            cur_train_datalist = cur_train_datalist[:2560]
            cur_test_datalist = cur_test_datalist[:2560]

        logger.info("[2-2] Set environment for the current task")
        method.set_current_dataset(cur_train_datalist, cur_test_datalist, cur_valid_datalist)
        # Increment known class for current task iteration.
        method.before_task(cur_train_datalist, cur_iter, args.init_model, args.init_opt)

        # The way to handle streamed samples
        logger.info(f"[2-3] Start to train under {args.stream_env}")

        if args.stream_env == "offline" or args.mode == "joint" or args.mode == "gdumb":
            # Offline Train
            # -----------------------------------------------------------------------------------------------------------------
            # Ray Tune for the first task of the blurry case
            # -----------------------------------------------------------------------------------------------------------------
            if args.exp_name == "blurry10" and cur_iter==0:
                # configs has already been defined.
                
                configs = {"lr": tune.loguniform(1e-4, 1e-1),
                           "weight_decay": tune.uniform(1e-8, 1e-1)}
                hyperopt_search = HyperOptSearch(metric='accuracy', mode='max')
                #hyperopt_search = BayesOptSearch(metric='loss', mode='min', points_to_evaluate=[{"lamda": 1}, {"lamda": 25}])
                scheduler = ASHAScheduler(
                    metric="accuracy",
                    mode="max",
                    max_t=100,
                    grace_period=5,
                    reduction_factor=2)
                
                reporter = CLIReporter(
                    parameter_columns=["lr", "weight_decay"],
                    metric_columns=["loss", "accuracy", "training_iteration"]
                    )
                pdb.set_trace()
                
                #pickle.dumps(tune.with_parameters(method.find_hyperparametrs))
                result = tune.run(
                                tune.with_parameters(method.find_hyperparametrs),
                                resources_per_trial={"cpu": 1, "gpu": 1},
                                config=configs,
                                num_samples=1,
                                search_alg=hyperopt_search,
                                scheduler=scheduler,
                                #keep_checkpoints_num=2,
                                checkpoint_score_attr="accuracy", 
                                progress_reporter=reporter
                                )
                
                best_trial = result.get_best_trial("accuracy", "max", "last")
                logging.info("Best trial config: {}".format(best_trial.configs))
                logging.info("Best trial final validation loss: {}".format(best_trial.last_result["loss"]))
                logging.info("Best trial final validation accuracy: {}".format(best_trial.last_result["accuracy"]))

                best_checkpoint = result.get_best_checkpoint(trial=best_trial, metric="accuracy", mode="max")
                best_checkpoint_dir = best_checkpoint.to_directory(path="directory")
                model_state, optimizer_state = torch.load(os.path.join(best_checkpoint_dir, "checkpoint"))
                #best_trained_model = model._network
                #best_trained_model.load_state_dict(model_state)
                method.model.load_state_dict(model_state)
                #model.check_fisher()

                # get_dataloader returns (train, test, valid) loaders; only the test loader is needed here
                _, best_test_loader, _ = method.get_dataloader(batch_size=args.batchsize, n_worker=args.n_worker, train_list=None, test_list=method.test_list, valid_list=None)
                test_dictionary = method.evaluation(best_test_loader, method.criterion)
                print("Best trial test set accuracy: {}".format(test_dictionary["avg_acc"]))

                
            else:

                # add the condition for the first task in the blurry set-up - ray tune
                task_acc, eval_dict = method.train(
                    cur_iter=cur_iter,
                    n_epoch=args.n_epoch,
                    batch_size=args.batchsize,
                    n_worker=args.n_worker,
                    writer=writer
                )
                if args.mode == "joint":
                    logger.info(f"joint accuracy: {task_acc}")

        elif args.stream_env == "online":
            # Online Train
            logger.info("Train over streamed data once")
            method.train(
                cur_iter=cur_iter,
                n_epoch=1,
                batch_size=args.batchsize,
                n_worker=args.n_worker,
                writer=writer
            )

            method.update_memory(cur_iter)

            # No stremed training data, train with only memory_list
            method.set_current_dataset([], cur_test_datalist, cur_valid_datalist)

            logger.info("Train over memory")
            task_acc, eval_dict = method.train(
                cur_iter=cur_iter,
                n_epoch=args.n_epoch,
                batch_size=args.batchsize,
                n_worker=args.n_worker,
                writer=writer
            )

            method.after_task(cur_iter)

        logger.info("[2-4] Update the information for the current task")
        method.after_task(cur_iter)
        task_records["task_acc"].append(task_acc)
        # task_records['cls_acc'][k][j] = break down j-class accuracy from 'task_acc'
        task_records["cls_acc"].append(eval_dict["cls_acc"])

        # I can add the checkpoints here but I don't think for the baseline it is necessary 
        '''ToDo
        '''
        # Notify to NSML
        logger.info("[2-5] Report task result")
        writer.add_scalar("Metrics/TaskAcc", task_acc, cur_iter)
    

    np.save(f"results/{save_path}.npy", task_records["cls_acc"])

    # Accuracy (A)
    A_avg = np.mean(task_records["task_acc"]) # this should be the avg of the last accuracy on the test set!
    A_last = task_records["task_acc"][args.n_tasks - 1]

    # Forgetting (F)
    acc_arr = np.array(task_records["cls_acc"])
    # cls_acc = (k, j), acc for j at k
    cls_acc = acc_arr.reshape(-1, args.n_cls_a_task).mean(1).reshape(args.n_tasks, -1)
    for k in range(args.n_tasks):
        forget_k = []
        for j in range(args.n_tasks):
            if j < k:
                forget_k.append(cls_acc[:k, j].max() - cls_acc[k, j])
            else:
                forget_k.append(None)
        task_records["forget"].append(forget_k)
    F_last = np.mean(task_records["forget"][-1][:-1])

    # Intransigence (I)
    I_last = args.joint_acc - A_last

    logger.info(f"======== Summary =======")
    logger.info(f"A_last {A_last} | A_avg {A_avg} | F_last {F_last} | I_last {I_last}")


if __name__ == "__main__":
    main()

You need to add one more argument to the config.py file in the repo:

parser.add_argument("--weight_decay", type=float, default=0.0, help="weight_decay")

and also add it to experiment.sh.

I added the so-called trainable to methods/rainbow_memory.py. Here is the changed file:

"""
rainbow-memory
Copyright 2021-present NAVER Corp.
GPLv3
"""
import logging
import random

import numpy as np
import pandas as pd
import torch
torch.use_deterministic_algorithms(True, warn_only=True)
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

# Ray Tune
import ray
from ray.tune import Trainable 
from ray import tune
from utils.train_utils import select_model, select_optimizer

from utils.early_stopping import EarlyStopping
from methods.finetune import Finetune
from utils.data_loader import cutmix_data, ImageDataset
import time 
import pdb



logger = logging.getLogger()
#writer = SummaryWriter("tensorboard")

def cycle(iterable):
    # iterate with shuffling
    while True:
        for i in iterable:
            yield i

This class inherits from methods/finetune.py:

"""
rainbow-memory
Copyright 2021-present NAVER Corp.
GPLv3
"""
# When we make a new one, we should inherit the Finetune class.
import logging
import os
import random

import PIL
import numpy as np
import pandas as pd
from utils.early_stopping import EarlyStopping
import torch
torch.use_deterministic_algorithms(True, warn_only=True)
import torch.nn as nn
from randaugment.randaugment import RandAugment
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
from torchvision import transforms
import  torchsummary 

from utils.augment import Cutout, Invert, Solarize, select_autoaugment
from utils.data_loader import ImageDataset
from utils.data_loader import cutmix_data
from utils.train_utils import select_model, select_optimizer
import pdb

logger = logging.getLogger()
#writer = SummaryWriter("tensorboard")


class ICaRLNet(nn.Module):
    def __init__(self, model, feature_size, n_class):
        super().__init__()
        self.model = model
        self.bn = nn.BatchNorm1d(feature_size, momentum=0.01)
        self.ReLU = nn.ReLU()
        self.fc = nn.Linear(feature_size, n_class, bias=False)

    def forward(self, x):
        x = self.model(x)
        x = self.bn(x)
        x = self.ReLU(x)
        x = self.fc(x)
        return x


class Finetune:
    def __init__(
        self, criterion, device, train_transform, test_transform, n_classes, **kwargs
    ):
        self.num_learned_class = 0
        self.num_learning_class = kwargs["n_init_cls"]
        self.n_classes = n_classes
        self.learned_classes = []
        self.class_mean = [None] * n_classes
        self.exposed_classes = []
        self.seen = 0
        self.topk = kwargs["topk"]

        self.device = device
        self.criterion = criterion
        self.dataset = kwargs["dataset"]
        self.model_name = kwargs["model_name"]
        self.opt_name = kwargs["opt_name"]
        self.sched_name = kwargs["sched_name"]
        self.lr = kwargs["lr"]
        self.weight_decay = kwargs['weight_decay']
        self.feature_size = kwargs["feature_size"]

        self.train_transform = train_transform
        self.cutmix = "cutmix" in kwargs["transforms"]
        self.test_transform = test_transform

        self.prev_streamed_list = []
        self.streamed_list = []
        self.test_list = []
         # add valid list 
        self.valid_list = []
        self.memory_list = []
        self.memory_size = kwargs["memory_size"]
        self.mem_manage = kwargs["mem_manage"]
        if kwargs["mem_manage"] == "default":
            self.mem_manage = "random"

        self.model = select_model(self.model_name, self.dataset, kwargs["n_init_cls"])
        self.model = self.model.to(self.device)
        # summary of the model
        result = torchsummary.summary(self.model, (3, 32, 32), 16)
        print(result)
        '''
         with open("summary_gdumb.txt", "w") as f:
                f.write(result)
        
        '''
      

        # ------------------------------------------
        # dtype
        # ------------------------------------------
        '''
        for name, param in self.model.named_parameters():
            print(f"{name}: {param.dtype}")
        '''
        # -------------------------------------------        
        self.criterion = self.criterion.to(self.device)

        self.already_mem_update = False

        self.mode = kwargs["mode"]

        self.uncert_metric = kwargs["uncert_metric"]

        # cuda events for measuring CUDA time? 
        self.start_event = torch.cuda.Event(enable_timing=True)
        self.end_event = torch.cuda.Event(enable_timing=True)
        self.early_stopping = kwargs["early_stopping"]
        if self.early_stopping is True:
            logger.info("early_stopping is set to True")
        else:
            logger.info("early_stopping is set to False")

    def set_current_dataset(self, train_datalist, test_datalist, valid_datalist):
        random.shuffle(train_datalist)
        self.prev_streamed_list = self.streamed_list
        self.streamed_list = train_datalist
        self.test_list = test_datalist
        # add validation set
        self.valid_list = valid_datalist


        # For ray tune - test
        self.train_loader, self.test_loader, self.valid_loader = self.get_dataloader(
            self.batch_size, self.n_worker,
            train_list=self.streamed_list,  # already shuffled above; random.shuffle() shuffles in place and returns None
            test_list=self.test_list, valid_list=self.valid_list)

    def before_task(self, datalist, cur_iter, init_model=False, init_opt=True):
        logger.info("Apply before_task")
        incoming_classes = pd.DataFrame(datalist)["klass"].unique().tolist()
        self.exposed_classes = list(set(self.learned_classes + incoming_classes))
        self.num_learning_class = max(
            len(self.exposed_classes), self.num_learning_class
        )

        if self.mem_manage == "prototype":
            self.model.fc = nn.Linear(self.model.fc.in_features, self.feature_size)
            self.feature_extractor = self.model
            self.model = ICaRLNet(
                self.feature_extractor, self.feature_size, self.num_learning_class
            )

        in_features = self.model.fc.in_features
        out_features = self.model.fc.out_features
        # To care the case of decreasing head
        new_out_features = max(out_features, self.num_learning_class)
        if init_model:
            # init model parameters in every iteration
            logger.info("Reset model parameters")
            self.model = select_model(self.model_name, self.dataset, new_out_features)
        else:
            self.model.fc = nn.Linear(in_features, new_out_features)
            
        self.params = {
            n: p for n, p in list(self.model.named_parameters())[:-2] if p.requires_grad
        }  # For regularization methods
        self.model = self.model.to(self.device)

        if init_opt:
            # reinitialize the optimizer and scheduler
            logger.info("Reset the optimizer and scheduler states")
            self.optimizer, self.scheduler = select_optimizer(
                self.opt_name, self.lr, self.weight_decay, self.model, self.sched_name
            )

        logger.info(f"Increasing the head of fc {out_features} -> {new_out_features}")

        self.already_mem_update = False

    def after_task(self, cur_iter):
        logger.info("Apply after_task")
        self.learned_classes = self.exposed_classes
        self.num_learned_class = self.num_learning_class
        self.update_memory(cur_iter)

    def update_memory(self, cur_iter, num_class=None):
        if num_class is None:
            num_class = self.num_learning_class

        if not self.already_mem_update:
            logger.info(f"Update memory over {num_class} classes by {self.mem_manage}")
            candidates = self.streamed_list + self.memory_list
            if len(candidates) <= self.memory_size:
                self.memory_list = candidates
                self.seen = len(candidates)
                logger.warning("Candidates < Memory size")
            else:
                if self.mem_manage == "random":
                    self.memory_list = self.rnd_sampling(candidates)
                elif self.mem_manage == "random_balanced":
                    self.memory_list = self.equal_class_sampling(
                            candidates, num_class
                        )
                elif self.mem_manage == "reservoir":
                    self.reservoir_sampling(self.streamed_list)
                elif self.mem_manage == "prototype":
                    self.memory_list = self.mean_feature_sampling(
                        exemplars=self.memory_list,
                        samples=self.streamed_list,
                        num_class=num_class,
                    )
                elif self.mem_manage == "uncertainty":
                    if cur_iter == 0:
                        self.memory_list = self.equal_class_sampling(
                            candidates, num_class
                        )
                    else:
                        self.memory_list = self.uncertainty_sampling(
                            candidates,
                            num_class=num_class,
                        )
                else:
                    logger.error("Not implemented memory management")
                    raise NotImplementedError

            assert len(self.memory_list) <= self.memory_size
            logger.info("Memory statistic")
            memory_df = pd.DataFrame(self.memory_list)
            logger.info(f"\n{memory_df.klass.value_counts(sort=True)}")
            # memory update happens only once per task iteration.
            self.already_mem_update = True
        else:
            logger.warning(f"Already updated the memory during this iter ({cur_iter})")

    def get_dataloader(self, batch_size, n_worker, train_list, test_list, valid_list):
        # Loader
        train_loader = None
        test_loader = None
        # add valid_loader
        valid_loader = None

        if train_list is not None and len(train_list) > 0:
            train_dataset = ImageDataset(
                pd.DataFrame(train_list),
                dataset=self.dataset,
                transform=self.train_transform,
            )
            # drop last because of BatchNorm1D in IcarlNet
            train_loader = DataLoader(
                train_dataset,
                shuffle=True,
                batch_size=batch_size,
                num_workers=n_worker,
                drop_last=True,
            )

        if test_list is not None:
            test_dataset = ImageDataset(
                pd.DataFrame(test_list),
                dataset=self.dataset,
                transform=self.test_transform,
            )
            test_loader = DataLoader(
                test_dataset, shuffle=False, batch_size=batch_size, num_workers=n_worker
            )

        if valid_list is not None:
            valid_dataset = ImageDataset(
                pd.DataFrame(valid_list),
                dataset=self.dataset,
                transform=self.test_transform, # use the same transformation for the valid set as the test set
            )
            valid_loader = DataLoader(
                valid_dataset, shuffle=False, batch_size=batch_size, num_workers=n_worker, pin_memory=True)

        return train_loader, test_loader, valid_loader

    def train(self, cur_iter, n_epoch, batch_size, n_worker, n_passes=1):

        train_list = self.streamed_list + self.memory_list
        random.shuffle(train_list)
        test_list = self.test_list
        valid_set = self.valid_list
        train_loader, test_loader, valid_loader = self.get_dataloader(
            batch_size, n_worker, train_list, test_list, valid_set
        )

        logger.info(f"Streamed samples: {len(self.streamed_list)}")
        logger.info(f"In-memory samples: {len(self.memory_list)}")
        logger.info(f"Train samples: {len(train_list)}")
        logger.info(f"Test samples: {len(test_list)}")

        # TRAIN
        best_acc = 0.0
        eval_dict = dict()
        for epoch in range(n_epoch):
            # https://github.com/drimpossible/GDumb/blob/master/src/main.py
            # initialize for each task
            if epoch <= 0:  # Warm start of 1 epoch
                for param_group in self.optimizer.param_groups:
                    param_group["lr"] = self.lr * 0.1
            elif epoch == 1:  # Then set to maxlr
                for param_group in self.optimizer.param_groups:
                    param_group["lr"] = self.lr
            else:  # Aand go!
                self.scheduler.step()

            train_loss, train_acc = self._train(
                train_loader=train_loader,
                optimizer=self.optimizer,
                criterion=self.criterion,
                epoch=epoch,
                total_epochs=n_epoch,
                n_passes=n_passes,
            )

            eval_dict = self.evaluation(
                test_loader=test_loader, criterion=self.criterion
            )
            
            '''
            writer.add_scalar(f"task{cur_iter}/train/loss", train_loss, epoch)
            writer.add_scalar(f"task{cur_iter}/train/acc", train_acc, epoch)
            writer.add_scalar(f"task{cur_iter}/test/loss", eval_dict["avg_loss"], epoch)
            writer.add_scalar(f"task{cur_iter}/test/acc", eval_dict["avg_acc"], epoch)
            writer.add_scalar(
                f"task{cur_iter}/train/lr", self.optimizer.param_groups[0]["lr"], epoch
            )
            '''
            logger.info(
                f"Task {cur_iter} | Epoch {epoch+1}/{n_epoch} | train_loss {train_loss:.4f} | train_acc {train_acc:.4f} | "
                f"test_loss {eval_dict['avg_loss']:.4f} | test_acc {eval_dict['avg_acc']:.4f} | "
                f"lr {self.optimizer.param_groups[0]['lr']:.4f}"
            )

            best_acc = max(best_acc, eval_dict["avg_acc"])

        return best_acc, eval_dict

    def _train(
        self, train_loader, optimizer, criterion, epoch, total_epochs, n_passes=1
    ):
        total_loss, correct, num_data = 0.0, 0.0, 0.0
        self.model.train()
        for i, data in enumerate(train_loader):
            for pass_ in range(n_passes):
                x = data["image"]
                y = data["label"]
                x = x.to(self.device)
                y = y.to(self.device)

                optimizer.zero_grad()

                do_cutmix = self.cutmix and np.random.rand(1) < 0.5
                if do_cutmix:
                    x, labels_a, labels_b, lam = cutmix_data(x=x, y=y, alpha=1.0)
                    logit = self.model(x)
                    loss = lam * criterion(logit, labels_a) + (1 - lam) * criterion(
                        logit, labels_b
                    )
                else:
                    logit = self.model(x)
                    loss = criterion(logit, y)
                _, preds = logit.topk(self.topk, 1, True, True)

                loss.backward()
                optimizer.step()
                total_loss += loss.item()
                correct += torch.sum(preds == y.unsqueeze(1)).item()
                num_data += y.size(0)

        n_batches = len(train_loader)

        return total_loss / n_batches, correct / num_data

    def evaluation_ext(self, test_list):
        # evaluation from out of class
        test_dataset = ImageDataset(
            pd.DataFrame(test_list),
            dataset=self.dataset,
            transform=self.test_transform,
        )
        test_loader = DataLoader(
            test_dataset, shuffle=False, batch_size=32, num_workers=2
        )
        eval_dict = self.evaluation(test_loader, self.criterion)

        return eval_dict

    def evaluation(self, test_loader, criterion):
        total_correct, total_num_data, total_loss = 0.0, 0.0, 0.0
        correct_l = torch.zeros(self.n_classes)
        num_data_l = torch.zeros(self.n_classes)
        label = []

        self.model.eval()
        with torch.no_grad():
            for i, data in enumerate(test_loader):
                x = data["image"]
                y = data["label"]
                x = x.to(self.device)
                y = y.to(self.device)
                logit = self.model(x)

                loss = criterion(logit, y)
                pred = torch.argmax(logit, dim=-1)
                _, preds = logit.topk(self.topk, 1, True, True)

                total_correct += torch.sum(preds == y.unsqueeze(1)).item()
                total_num_data += y.size(0)

                xlabel_cnt, correct_xlabel_cnt = self._interpret_pred(y, pred)
                correct_l += correct_xlabel_cnt.detach().cpu()
                num_data_l += xlabel_cnt.detach().cpu()

                total_loss += loss.item()
                label += y.tolist()

        avg_acc = total_correct / total_num_data
        avg_loss = total_loss / len(test_loader)
        cls_acc = (correct_l / (num_data_l + 1e-5)).numpy().tolist()
        ret = {"avg_loss": avg_loss, "avg_acc": avg_acc, "cls_acc": cls_acc}

        return ret

    def _interpret_pred(self, y, pred):
        # xlabel is the batch of labels
        ret_num_data = torch.zeros(self.n_classes)
        ret_corrects = torch.zeros(self.n_classes)

        xlabel_cls, xlabel_cnt = y.unique(return_counts=True)
        for cls_idx, cnt in zip(xlabel_cls, xlabel_cnt):
            ret_num_data[cls_idx] = cnt

        correct_xlabel = y.masked_select(y == pred)
        correct_cls, correct_cnt = correct_xlabel.unique(return_counts=True)
        for cls_idx, cnt in zip(correct_cls, correct_cnt):
            ret_corrects[cls_idx] = cnt

        return ret_num_data, ret_corrects

    def rnd_sampling(self, samples):
        random.shuffle(samples)
        return samples[: self.memory_size]

    def reservoir_sampling(self, samples):
        for sample in samples:
            if len(self.memory_list) < self.memory_size:
                self.memory_list += [sample]
            else:
                j = np.random.randint(0, self.seen)
                if j < self.memory_size:
                    self.memory_list[j] = sample
            self.seen += 1

    def mean_feature_sampling(self, exemplars, samples, num_class):
        """Prototype sampling

        Args:
            features ([Tensor]): [features corresponding to the samples]
            samples ([Datalist]): [datalist for a class]

        Returns:
            [type]: [Sampled datalist]
        """

        def _reduce_exemplar_sets(exemplars, mem_per_cls):
            if len(exemplars) == 0:
                return exemplars

            exemplar_df = pd.DataFrame(exemplars)
            ret = []
            for y in range(self.num_learned_class):
                cls_df = exemplar_df[exemplar_df["label"] == y]
                ret += cls_df.sample(n=min(mem_per_cls, len(cls_df))).to_dict(
                    orient="records"
                )

            num_dups = pd.DataFrame(ret).duplicated().sum()
            if num_dups > 0:
                logger.warning(f"Duplicated samples in memory: {num_dups}")

            return ret

        mem_per_cls = self.memory_size // num_class
        exemplars = _reduce_exemplar_sets(exemplars, mem_per_cls)
        old_exemplar_df = pd.DataFrame(exemplars)

        new_exemplar_set = []
        sample_df = pd.DataFrame(samples)
        for y in range(self.num_learning_class):
            cls_samples = []
            cls_exemplars = []
            if len(sample_df) != 0:
                cls_samples = sample_df[sample_df["label"] == y].to_dict(
                    orient="records"
                )
            if len(old_exemplar_df) != 0:
                cls_exemplars = old_exemplar_df[old_exemplar_df["label"] == y].to_dict(
                    orient="records"
                )

            if len(cls_exemplars) >= mem_per_cls:
                new_exemplar_set += cls_exemplars
                continue

            # Assign old exemplars to the samples
            cls_samples += cls_exemplars
            if len(cls_samples) <= mem_per_cls:
                new_exemplar_set += cls_samples
                continue

            features = []
            self.feature_extractor.eval()
            with torch.no_grad():
                for data in cls_samples:
                    image = PIL.Image.open(
                        os.path.join("dataset", self.dataset, data["file_name"])
                    ).convert("RGB")
                    x = self.test_transform(image).to(self.device)
                    feature = (
                        self.feature_extractor(x.unsqueeze(0)).detach().cpu().numpy()
                    )
                    feature = feature / np.linalg.norm(feature, axis=1)  # Normalize
                    features.append(feature.squeeze())

            features = np.array(features)
            logger.debug(f"[Prototype] features: {features.shape}")

            # do not replace the existing class mean
            if self.class_mean[y] is None:
                cls_mean = np.mean(features, axis=0)
                cls_mean /= np.linalg.norm(cls_mean)
                self.class_mean[y] = cls_mean
            else:
                cls_mean = self.class_mean[y]
            assert cls_mean.ndim == 1

            phi = features
            mu = cls_mean
            # select exemplars from the scratch
            exemplar_features = []
            num_exemplars = min(mem_per_cls, len(cls_samples))
            for j in range(num_exemplars):
                S = np.sum(exemplar_features, axis=0)
                mu_p = 1.0 / (j + 1) * (phi + S)
                mu_p = mu_p / np.linalg.norm(mu_p, axis=1, keepdims=True)

                dist = np.sqrt(np.sum((mu - mu_p) ** 2, axis=1))
                i = np.argmin(dist)

                new_exemplar_set.append(cls_samples[i])
                exemplar_features.append(phi[i])

                # Avoid to sample the duplicated one.
                del cls_samples[i]
                phi = np.delete(phi, i, 0)

        return new_exemplar_set

    def uncertainty_sampling(self, samples, num_class):
        """uncertainty based sampling

        Args:
            samples ([list]): [training_list + memory_list]
        """
        self.montecarlo(samples, uncert_metric=self.uncert_metric)

        sample_df = pd.DataFrame(samples)
        mem_per_cls = self.memory_size // num_class

        ret = []
        for i in range(num_class):
            cls_df = sample_df[sample_df["label"] == i]
            if len(cls_df) <= mem_per_cls:
                ret += cls_df.to_dict(orient="records")
            else:
                jump_idx = len(cls_df) // mem_per_cls
                uncertain_samples = cls_df.sort_values(by="uncertainty")[::jump_idx]
                ret += uncertain_samples[:mem_per_cls].to_dict(orient="records")

        num_rest_slots = self.memory_size - len(ret)
        if num_rest_slots > 0:
            logger.warning("Fill the unused slots by breaking the equilibrium.")
            ret += (
                sample_df[~sample_df.file_name.isin(pd.DataFrame(ret).file_name)]
                .sample(n=num_rest_slots)
                .to_dict(orient="records")
            )

        num_dups = pd.DataFrame(ret).file_name.duplicated().sum()
        if num_dups > 0:
            logger.warning(f"Duplicated samples in memory: {num_dups}")

        return ret

    def _compute_uncert(self, infer_list, infer_transform, uncert_name):
        batch_size = 32
        infer_df = pd.DataFrame(infer_list)
        infer_dataset = ImageDataset(
            infer_df, dataset=self.dataset, transform=infer_transform
        )
        infer_loader = DataLoader(
            infer_dataset, shuffle=False, batch_size=batch_size, num_workers=2
        )

        self.model.eval()
        with torch.no_grad():
            for n_batch, data in enumerate(infer_loader):
                x = data["image"]
                x = x.to(self.device)
                logit = self.model(x)
                logit = logit.detach().cpu()

                for i, cert_value in enumerate(logit):
                    sample = infer_list[batch_size * n_batch + i]
                    sample[uncert_name] = 1 - cert_value

    def montecarlo(self, candidates, uncert_metric="vr"):
        transform_cands = []
        logger.info(f"Compute uncertainty by {uncert_metric}!")
        if uncert_metric == "vr":
            transform_cands = [
                Cutout(size=8),
                Cutout(size=16),
                Cutout(size=24),
                Cutout(size=32),
                transforms.RandomHorizontalFlip(),
                transforms.RandomVerticalFlip(),
                transforms.RandomRotation(45),
                transforms.RandomRotation(90),
                Invert(),
                Solarize(v=128),
                Solarize(v=64),
                Solarize(v=32),
            ]
        elif uncert_metric == "vr_randaug":
            for _ in range(12):
                transform_cands.append(RandAugment())
        elif uncert_metric == "vr_cutout":
            transform_cands = [Cutout(size=16)] * 12
        elif uncert_metric == "vr_autoaug":
            transform_cands = [select_autoaugment(self.dataset)] * 12

        n_transforms = len(transform_cands)

        for idx, tr in enumerate(transform_cands):
            _tr = transforms.Compose([tr] + self.test_transform.transforms)
            self._compute_uncert(candidates, _tr, uncert_name=f"uncert_{str(idx)}")

        for sample in candidates:
            self.variance_ratio(sample, n_transforms)

    def variance_ratio(self, sample, cand_length):
        vote_counter = torch.zeros(sample["uncert_0"].size(0))
        for i in range(cand_length):
            top_class = int(torch.argmin(sample[f"uncert_{i}"]))  # uncert argmin.
            vote_counter[top_class] += 1
        assert vote_counter.sum() == cand_length
        sample["uncertainty"] = (1 - vote_counter.max() / cand_length).item()

    def equal_class_sampling(self, samples, num_class):
        mem_per_cls = self.memory_size // num_class
        sample_df = pd.DataFrame(samples)
        # Warning: assuming the classes were ordered following task number.
        ret = []
        for y in range(self.num_learning_class):
            cls_df = sample_df[sample_df["label"] == y]
            ret += cls_df.sample(n=min(mem_per_cls, len(cls_df))).to_dict(
                orient="records"
            )

        num_rest_slots = self.memory_size - len(ret)
        if num_rest_slots > 0:
            logger.warning("Fill the unused slots by breaking the equilibrium.")
            ret += (
                sample_df[~sample_df.file_name.isin(pd.DataFrame(ret).file_name)]
                .sample(n=num_rest_slots)
                .to_dict(orient="records")
            )

        num_dups = pd.DataFrame(ret).file_name.duplicated().sum()
        if num_dups > 0:
            logger.warning(f"Duplicated samples in memory: {num_dups}")

        return ret

    def measure_time(self, model, input):
        # Record the start time
        self.start_event.record()

        # Forward pass
        output = model(input)

        # Record the end time
        self.end_event.record()

        # Wait for the events to complete
        torch.cuda.synchronize()

        # Compute the elapsed time
        elapsed_time = self.start_event.elapsed_time(self.end_event)

        return elapsed_time, output

The only other file I have changed is utils/train_utils.py; here is the changed version:

"""
rainbow-memory
Copyright 2021-present NAVER Corp.
GPLv3
"""
import torch_optimizer
from easydict import EasyDict as edict
from torch import optim

from models import mnist, cifar, imagenet
import pdb

def select_optimizer(opt_name, lr, weight_decay, model, sched_name="cos"):
    
    if opt_name == "adam":
        opt = optim.Adam(model.parameters(), lr=lr, weight_decay=1e-6)
    elif opt_name == "radam":
        opt = torch_optimizer.RAdam(model.parameters(), lr=lr, weight_decay=0.00001)
    elif opt_name == "sgd":
        opt = optim.SGD(
            model.parameters(), lr=lr, momentum=0.9, nesterov=True, weight_decay=weight_decay
        )
    else:
        raise NotImplementedError("Please select the opt_name [adam, sgd]")

    if sched_name == "cos":
        scheduler = optim.lr_scheduler.CosineAnnealingWarmRestarts(
            opt, T_0=1, T_mult=2, eta_min=lr * 0.01
        )
    elif sched_name == "anneal":
        scheduler = optim.lr_scheduler.ExponentialLR(opt, 1 / 1.1, last_epoch=-1)
    elif sched_name == "multistep":
        scheduler = optim.lr_scheduler.MultiStepLR(
            opt, milestones=[30, 60, 80, 90], gamma=0.1
        )
    else:
        raise NotImplementedError(
            "Please select the sched_name [cos, anneal, multistep]"
        )

    return opt, scheduler


def select_model(model_name, dataset, num_classes=None):
    opt = edict(
        {
            "depth": 18,
            "num_classes": num_classes,
            "in_channels": 3,
            "bn": True,
            "normtype": "BatchNorm",
            "activetype": "ReLU",
            "pooltype": "MaxPool2d",
            "preact": False,
            "affine_bn": True,
            "bn_eps": 1e-6,
            "compression": 0.5,
        }
    )

    if "mnist" in dataset:
        model_class = getattr(mnist, "MLP")
    elif "cifar" in dataset:
        model_class = getattr(cifar, "ResNet")
    elif "imagenet" in dataset:
        model_class = getattr(imagenet, "ResNet")
    else:
        raise NotImplementedError(
            "Please select the appropriate datasets (mnist, cifar10, cifar100, imagenet)"
        )

    if model_name == "resnet18":
        opt["depth"] = 18
    elif model_name == "resnet32":
        opt["depth"] = 32
    elif model_name == "resnet34":
        opt["depth"] = 34
    elif model_name == "mlp400":
        opt["width"] = 400
    else:
        raise NotImplementedError(
            "Please choose the model name in [resnet18, resnet32, resnet34]"
        )

    model = model_class(opt)

    return model
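
For reference, this is roughly how the two tuned values reach select_optimizer inside the trainable (a sketch only; I am assuming the Tune config dict is named config and carries the "lr" and "weight_decay" keys from the search space, and that model is already built):

optimizer, scheduler = select_optimizer(
    opt_name="sgd",
    lr=config["lr"],
    weight_decay=config["weight_decay"],
    model=model,
    sched_name="cos",
)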

With these changes I get the error mentioned above. That verbose output is produced because I set RAY_PICKLE_VERBOSE_DEBUG=1; without it, I get the following error instead:

Traceback (most recent call last):
  File "/visinf/home/shamidi/Projects/rainbow-memory/main.py", line 293, in <module>
    main()
  File "/visinf/home/shamidi/Projects/rainbow-memory/main.py", line 180, in main
    result = tune.run(
  File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/ray/tune/tune.py", line 520, in run
    experiments[i] = Experiment(
  File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/ray/tune/experiment/experiment.py", line 163, in __init__
    self._run_identifier = Experiment.register_if_needed(run)
  File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/ray/tune/experiment/experiment.py", line 365, in register_if_needed
    raise type(e)(str(e) + " " + extra_msg) from None
TypeError: ray.cloudpickle.dumps(<class 'ray.tune.trainable.function_trainable.wrap_function.<locals>.ImplicitFunc'>) failed.
To check which non-serializable variables are captured in scope, re-run the ray script with 'RAY_PICKLE_VERBOSE_DEBUG=1'. Other options: 
-Try reproducing the issue by calling `pickle.dumps(trainable)`. 
-If the error is typing-related, try removing the type annotations and try again.

I appreciate any guidance 🙂

Can you check whether things run without the CUDA Events?

self.start_event = torch.cuda.Event(enable_timing=True)
self.end_event = torch.cuda.Event(enable_timing=True)
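
If you still need the timing, one option (just a sketch, not tested against your class) is to create the events locally inside measure_time, so nothing CUDA-specific is stored on self and captured when Tune pickles the trainable:

def measure_time(self, model, input):
    # Create the events locally so they never live on `self` and are
    # therefore never pickled together with the trainable.
    start_event = torch.cuda.Event(enable_timing=True)
    end_event = torch.cuda.Event(enable_timing=True)

    start_event.record()
    output = model(input)
    end_event.record()

    # Wait for the recorded events before reading the timer.
    torch.cuda.synchronize()
    elapsed_time = start_event.elapsed_time(end_event)  # milliseconds

    return elapsed_time, output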

Thank you for the answer. I removed the events and the previous error has disappeared. However, I now get a strange error:

Failure # 1 (occurred at 2023-05-21_14-04-20)
ray::ImplicitFunc.train() (pid=1316511, ip=130.83.94.32, repr=func)
  File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/ray/tune/trainable/trainable.py", line 347, in train
    result = self.step()
  File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/ray/tune/trainable/function_trainable.py", line 417, in step
    self._report_thread_runner_error(block=True)
  File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/ray/tune/trainable/function_trainable.py", line 589, in _report_thread_runner_error
    raise e
  File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/ray/tune/trainable/function_trainable.py", line 289, in run
    self._entrypoint()
  File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/ray/tune/trainable/function_trainable.py", line 362, in entrypoint
    return self._trainable_func(
  File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/ray/tune/trainable/function_trainable.py", line 684, in _trainable_func
    output = fn()
  File "/visinf/home/shamidi/Projects/rainbow-memory-Bayesian/methods/rainbow_memory.py", line 164, in find_hyperparametrs
    train_loss, train_acc = self._train(train_loader, memory_loader, optimizer=self.optimizer, criterion=self.criterion)
  File "/visinf/home/shamidi/Projects/rainbow-memory-Bayesian/methods/rainbow_memory.py", line 504, in _train
    for i, data in enumerate(tqdm(data_iterator)):
  File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/tqdm/_tqdm.py", line 1000, in __iter__
    for obj in iterable:
  File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 681, in __next__
    data = self._next_data()
  File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1376, in _next_data
    return self._process_data(data)
  File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1402, in _process_data
    data.reraise()
  File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/torch/_utils.py", line 461, in reraise
    raise exception
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
    data = fetcher.fetch(index)
  File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/visinf/home/shamidi/Projects/rainbow-memory-Bayesian/utils/data_loader.py", line 38, in __getitem__
    image = PIL.Image.open(img_path).convert("RGB")
  File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/PIL/Image.py", line 3092, in open
    fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'dataset/cifar10/train/automobile/4746.png'

When I disable ray.tune() and run the normal code, the dataloaders work perfectly fine. Because of this error, all the trials fail:

== Status ==
Current time: 2023-05-21 17:56:02 (running for 00:00:07.31)
Memory usage on this node: 11.5/125.8 GiB
Using AsyncHyperBand: num_stopped=0
Bracket: Iter 80.000: None | Iter 40.000: None | Iter 20.000: None | Iter 10.000: None | Iter 5.000: None
Resources requested: 24.0/24 CPUs, 0.5/1 GPUs, 0.0/75.52 GiB heap, 0.0/36.36 GiB objects (0.0/1.0 accelerator_type:TITAN)
Result logdir: /visinf/home/shamidi/ray_results/find_hyperparametrs_2023-05-21_17-55-54
Number of trials: 1/4 (1 RUNNING)
+------------------------------+----------+----------------------+-----------+----------------+
| Trial name                   | status   | loc                  |        lr |   weight_decay |
|------------------------------+----------+----------------------+-----------+----------------|
| find_hyperparametrs_f8047db8 | RUNNING  | 130.83.94.32:1340521 | 0.0274382 |     0.00940278 |
+------------------------------+----------+----------------------+-----------+----------------+


  0%|          | 0/38 [00:00<?, ?it/s]
(find_hyperparametrs pid=1340521) 2023-05-21 17:56:02,324       ERROR function_trainable.py:298 -- Runner Thread raised error.
(find_hyperparametrs pid=1340521) Traceback (most recent call last):
(find_hyperparametrs pid=1340521)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/ray/tune/trainable/function_trainable.py", line 289, in run
(find_hyperparametrs pid=1340521)     self._entrypoint()
(find_hyperparametrs pid=1340521)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/ray/tune/trainable/function_trainable.py", line 362, in entrypoint
(find_hyperparametrs pid=1340521)     return self._trainable_func(
(find_hyperparametrs pid=1340521)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/ray/util/tracing/tracing_helper.py", line 466, in _resume_span
(find_hyperparametrs pid=1340521)     return method(self, *_args, **_kwargs)
(find_hyperparametrs pid=1340521)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/ray/tune/trainable/function_trainable.py", line 684, in _trainable_func
(find_hyperparametrs pid=1340521)     output = fn()
(find_hyperparametrs pid=1340521)   File "/visinf/home/shamidi/Projects/rainbow-memory-Bayesian/methods/rainbow_memory.py", line 165, in find_hyperparametrs
(find_hyperparametrs pid=1340521)     train_loss, train_acc = self._train(train_loader=train_loader, memory_loader=memory_loader, optimizer=self.optimizer, criterion=self.criterion)
(find_hyperparametrs pid=1340521)   File "/visinf/home/shamidi/Projects/rainbow-memory-Bayesian/methods/rainbow_memory.py", line 509, in _train
(find_hyperparametrs pid=1340521)     for i, data in enumerate(tqdm(data_iterator)):
(find_hyperparametrs pid=1340521)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/tqdm/_tqdm.py", line 1000, in __iter__
(find_hyperparametrs pid=1340521)     for obj in iterable:
(find_hyperparametrs pid=1340521)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 681, in __next__
(find_hyperparametrs pid=1340521)     data = self._next_data()
(find_hyperparametrs pid=1340521)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1376, in _next_data
(find_hyperparametrs pid=1340521)     return self._process_data(data)
(find_hyperparametrs pid=1340521)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1402, in _process_data
(find_hyperparametrs pid=1340521)     data.reraise()
(find_hyperparametrs pid=1340521)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/torch/_utils.py", line 461, in reraise
(find_hyperparametrs pid=1340521)     raise exception
(find_hyperparametrs pid=1340521) FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
(find_hyperparametrs pid=1340521) Original Traceback (most recent call last):
(find_hyperparametrs pid=1340521)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
(find_hyperparametrs pid=1340521)     data = fetcher.fetch(index)
(find_hyperparametrs pid=1340521)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
(find_hyperparametrs pid=1340521)     data = [self.dataset[idx] for idx in possibly_batched_index]
(find_hyperparametrs pid=1340521)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
(find_hyperparametrs pid=1340521)     data = [self.dataset[idx] for idx in possibly_batched_index]
(find_hyperparametrs pid=1340521)   File "/visinf/home/shamidi/Projects/rainbow-memory-Bayesian/utils/data_loader.py", line 38, in __getitem__
(find_hyperparametrs pid=1340521)     image = PIL.Image.open(img_path).convert("RGB")
(find_hyperparametrs pid=1340521)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/PIL/Image.py", line 3092, in open
(find_hyperparametrs pid=1340521)     fp = builtins.open(filename, "rb")
(find_hyperparametrs pid=1340521) FileNotFoundError: [Errno 2] No such file or directory: 'dataset/cifar10/train/automobile/1487.png'
(find_hyperparametrs pid=1340521) 
Result for find_hyperparametrs_f8047db8:
  date: 2023-05-21_17-56-02
  experiment_id: aa78f714932c4103af1941e33d1001e1
  hostname: node12
  node_ip: 130.83.94.32
  pid: 1340521
  timestamp: 1684684562
  trial_id: f8047db8
  
== Status ==
Current time: 2023-05-21 17:56:02 (running for 00:00:07.81)
Memory usage on this node: 11.6/125.8 GiB
Using AsyncHyperBand: num_stopped=0
Bracket: Iter 80.000: None | Iter 40.000: None | Iter 20.000: None | Iter 10.000: None | Iter 5.000: None
Resources requested: 0/24 CPUs, 0/1 GPUs, 0.0/75.52 GiB heap, 0.0/36.36 GiB objects (0.0/1.0 accelerator_type:TITAN)
Result logdir: /visinf/home/shamidi/ray_results/find_hyperparametrs_2023-05-21_17-55-54
Number of trials: 2/4 (1 ERROR, 1 PENDING)
+------------------------------+----------+----------------------+------------+----------------+
| Trial name                   | status   | loc                  |         lr |   weight_decay |
|------------------------------+----------+----------------------+------------+----------------|
| find_hyperparametrs_fc6102dc | PENDING  |                      | 0.00907818 |     0.0171148  |
| find_hyperparametrs_f8047db8 | ERROR    | 130.83.94.32:1340521 | 0.0274382  |     0.00940278 |
+------------------------------+----------+----------------------+------------+----------------+
Number of errored trials: 1
+------------------------------+--------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Trial name                   |   # failures | error file                                                                                                                                                          |
|------------------------------+--------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| find_hyperparametrs_f8047db8 |            1 | /visinf/home/shamidi/ray_results/find_hyperparametrs_2023-05-21_17-55-54/find_hyperparametrs_f8047db8_1_lr=0.0274,weight_decay=0.0094_2023-05-21_17-55-55/error.txt |
+------------------------------+--------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+

  0%|          | 0/38 [00:00<?, ?it/s]
Result for find_hyperparametrs_fc6102dc:
  date: 2023-05-21_17-56-08
  experiment_id: 11666c7f06e74fcaa53006ee31a7d773
  hostname: node12
  node_ip: 130.83.94.32
  pid: 1340725
  timestamp: 1684684568
  trial_id: fc6102dc
  
(find_hyperparametrs pid=1340725) 2023-05-21 17:56:08,527       ERROR function_trainable.py:298 -- Runner Thread raised error.
(find_hyperparametrs pid=1340725) Traceback (most recent call last):
(find_hyperparametrs pid=1340725)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/ray/tune/trainable/function_trainable.py", line 289, in run
(find_hyperparametrs pid=1340725)     self._entrypoint()
(find_hyperparametrs pid=1340725)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/ray/tune/trainable/function_trainable.py", line 362, in entrypoint
(find_hyperparametrs pid=1340725)     return self._trainable_func(
(find_hyperparametrs pid=1340725)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/ray/util/tracing/tracing_helper.py", line 466, in _resume_span
(find_hyperparametrs pid=1340725)     return method(self, *_args, **_kwargs)
(find_hyperparametrs pid=1340725)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/ray/tune/trainable/function_trainable.py", line 684, in _trainable_func
(find_hyperparametrs pid=1340725)     output = fn()
(find_hyperparametrs pid=1340725)   File "/visinf/home/shamidi/Projects/rainbow-memory-Bayesian/methods/rainbow_memory.py", line 165, in find_hyperparametrs
(find_hyperparametrs pid=1340725)     train_loss, train_acc = self._train(train_loader=train_loader, memory_loader=memory_loader, optimizer=self.optimizer, criterion=self.criterion)
(find_hyperparametrs pid=1340725)   File "/visinf/home/shamidi/Projects/rainbow-memory-Bayesian/methods/rainbow_memory.py", line 509, in _train
(find_hyperparametrs pid=1340725)     for i, data in enumerate(tqdm(data_iterator)):
(find_hyperparametrs pid=1340725)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/tqdm/_tqdm.py", line 1000, in __iter__
(find_hyperparametrs pid=1340725)     for obj in iterable:
(find_hyperparametrs pid=1340725)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 681, in __next__
(find_hyperparametrs pid=1340725)     data = self._next_data()
(find_hyperparametrs pid=1340725)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1376, in _next_data
(find_hyperparametrs pid=1340725)     return self._process_data(data)
(find_hyperparametrs pid=1340725)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1402, in _process_data
(find_hyperparametrs pid=1340725)     data.reraise()
(find_hyperparametrs pid=1340725)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/torch/_utils.py", line 461, in reraise
(find_hyperparametrs pid=1340725)     raise exception
(find_hyperparametrs pid=1340725) FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
(find_hyperparametrs pid=1340725) Original Traceback (most recent call last):
(find_hyperparametrs pid=1340725)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
(find_hyperparametrs pid=1340725)     data = fetcher.fetch(index)
(find_hyperparametrs pid=1340725)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
(find_hyperparametrs pid=1340725)     data = [self.dataset[idx] for idx in possibly_batched_index]
(find_hyperparametrs pid=1340725)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
(find_hyperparametrs pid=1340725)     data = [self.dataset[idx] for idx in possibly_batched_index]
(find_hyperparametrs pid=1340725)   File "/visinf/home/shamidi/Projects/rainbow-memory-Bayesian/utils/data_loader.py", line 38, in __getitem__
(find_hyperparametrs pid=1340725)     image = PIL.Image.open(img_path).convert("RGB")
(find_hyperparametrs pid=1340725)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/PIL/Image.py", line 3092, in open
(find_hyperparametrs pid=1340725)     fp = builtins.open(filename, "rb")
(find_hyperparametrs pid=1340725) FileNotFoundError: [Errno 2] No such file or directory: 'dataset/cifar10/train/truck/4032.png'
(find_hyperparametrs pid=1340725) 
== Status ==
Current time: 2023-05-21 17:56:08 (running for 00:00:13.76)
Memory usage on this node: 11.7/125.8 GiB
Using AsyncHyperBand: num_stopped=0
Bracket: Iter 80.000: None | Iter 40.000: None | Iter 20.000: None | Iter 10.000: None | Iter 5.000: None
Resources requested: 0/24 CPUs, 0/1 GPUs, 0.0/75.52 GiB heap, 0.0/36.36 GiB objects (0.0/1.0 accelerator_type:TITAN)
Result logdir: /visinf/home/shamidi/ray_results/find_hyperparametrs_2023-05-21_17-55-54
Number of trials: 3/4 (2 ERROR, 1 PENDING)
+------------------------------+----------+----------------------+------------+----------------+
| Trial name                   | status   | loc                  |         lr |   weight_decay |
|------------------------------+----------+----------------------+------------+----------------|
| find_hyperparametrs_fd1c8da4 | PENDING  |                      | 0.0388599  |     0.0210795  |
| find_hyperparametrs_f8047db8 | ERROR    | 130.83.94.32:1340521 | 0.0274382  |     0.00940278 |
| find_hyperparametrs_fc6102dc | ERROR    | 130.83.94.32:1340725 | 0.00907818 |     0.0171148  |
+------------------------------+----------+----------------------+------------+----------------+
Number of errored trials: 2
+------------------------------+--------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Trial name                   |   # failures | error file                                                                                                                                                          |
|------------------------------+--------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| find_hyperparametrs_f8047db8 |            1 | /visinf/home/shamidi/ray_results/find_hyperparametrs_2023-05-21_17-55-54/find_hyperparametrs_f8047db8_1_lr=0.0274,weight_decay=0.0094_2023-05-21_17-55-55/error.txt |
| find_hyperparametrs_fc6102dc |            1 | /visinf/home/shamidi/ray_results/find_hyperparametrs_2023-05-21_17-55-54/find_hyperparametrs_fc6102dc_2_lr=0.0091,weight_decay=0.0171_2023-05-21_17-56-02/error.txt |
+------------------------------+--------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+

  0%|          | 0/38 [00:00<?, ?it/s]
Result for find_hyperparametrs_fd1c8da4:
  date: 2023-05-21_17-56-14
  experiment_id: 5db3bb4ace6e4037ab409c00aed1c64b
  hostname: node12
  node_ip: 130.83.94.32
  pid: 1341062
  timestamp: 1684684574
  trial_id: fd1c8da4

(find_hyperparametrs pid=1341062) 2023-05-21 17:56:14,724       ERROR function_trainable.py:298 -- Runner Thread raised error.
(find_hyperparametrs pid=1341062) Traceback (most recent call last):
(find_hyperparametrs pid=1341062)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/ray/tune/trainable/function_trainable.py", line 289, in run
(find_hyperparametrs pid=1341062)     self._entrypoint()
(find_hyperparametrs pid=1341062)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/ray/tune/trainable/function_trainable.py", line 362, in entrypoint
(find_hyperparametrs pid=1341062)     return self._trainable_func(
(find_hyperparametrs pid=1341062)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/ray/util/tracing/tracing_helper.py", line 466, in _resume_span
(find_hyperparametrs pid=1341062)     return method(self, *_args, **_kwargs)
(find_hyperparametrs pid=1341062)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/ray/tune/trainable/function_trainable.py", line 684, in _trainable_func
(find_hyperparametrs pid=1341062)     output = fn()
(find_hyperparametrs pid=1341062)   File "/visinf/home/shamidi/Projects/rainbow-memory-Bayesian/methods/rainbow_memory.py", line 165, in find_hyperparametrs
(find_hyperparametrs pid=1341062)     train_loss, train_acc = self._train(train_loader=train_loader, memory_loader=memory_loader, optimizer=self.optimizer, criterion=self.criterion)
(find_hyperparametrs pid=1341062)   File "/visinf/home/shamidi/Projects/rainbow-memory-Bayesian/methods/rainbow_memory.py", line 509, in _train
(find_hyperparametrs pid=1341062)     for i, data in enumerate(tqdm(data_iterator)):
(find_hyperparametrs pid=1341062)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/tqdm/_tqdm.py", line 1000, in __iter__
(find_hyperparametrs pid=1341062)     for obj in iterable:
(find_hyperparametrs pid=1341062)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 681, in __next__
(find_hyperparametrs pid=1341062)     data = self._next_data()
(find_hyperparametrs pid=1341062)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1376, in _next_data
(find_hyperparametrs pid=1341062)     return self._process_data(data)
(find_hyperparametrs pid=1341062)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1402, in _process_data
(find_hyperparametrs pid=1341062)     data.reraise()
(find_hyperparametrs pid=1341062)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/torch/_utils.py", line 461, in reraise
(find_hyperparametrs pid=1341062)     raise exception
(find_hyperparametrs pid=1341062) FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
(find_hyperparametrs pid=1341062) Original Traceback (most recent call last):
(find_hyperparametrs pid=1341062)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
(find_hyperparametrs pid=1341062)     data = fetcher.fetch(index)
(find_hyperparametrs pid=1341062)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
(find_hyperparametrs pid=1341062)     data = [self.dataset[idx] for idx in possibly_batched_index]
(find_hyperparametrs pid=1341062)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
(find_hyperparametrs pid=1341062)     data = [self.dataset[idx] for idx in possibly_batched_index]
(find_hyperparametrs pid=1341062)   File "/visinf/home/shamidi/Projects/rainbow-memory-Bayesian/utils/data_loader.py", line 38, in __getitem__
(find_hyperparametrs pid=1341062)     image = PIL.Image.open(img_path).convert("RGB")
(find_hyperparametrs pid=1341062)   File "/visinf/home/shamidi/anaconda3_new/envs/first_env/lib/python3.9/site-packages/PIL/Image.py", line 3092, in open
(find_hyperparametrs pid=1341062)     fp = builtins.open(filename, "rb")
(find_hyperparametrs pid=1341062) FileNotFoundError: [Errno 2] No such file or directory: 'dataset/cifar10/train/automobile/2267.png'
(find_hyperparametrs pid=1341062) 
== Status ==
Current time: 2023-05-21 17:56:14 (running for 00:00:19.94)
Memory usage on this node: 11.7/125.8 GiB
Using AsyncHyperBand: num_stopped=0
Bracket: Iter 80.000: None | Iter 40.000: None | Iter 20.000: None | Iter 10.000: None | Iter 5.000: None
Resources requested: 0/24 CPUs, 0/1 GPUs, 0.0/75.52 GiB heap, 0.0/36.36 GiB objects (0.0/1.0 accelerator_type:TITAN)
Result logdir: /visinf/home/shamidi/ray_results/find_hyperparametrs_2023-05-21_17-55-54
Number of trials: 4/4 (3 ERROR, 1 PENDING)
+------------------------------+----------+----------------------+------------+----------------+
| Trial name                   | status   | loc                  |         lr |   weight_decay |
|------------------------------+----------+----------------------+------------+----------------|
| find_hyperparametrs_00b9c760 | PENDING  |                      | 0.0176559  |     0.0310831  |
| find_hyperparametrs_f8047db8 | ERROR    | 130.83.94.32:1340521 | 0.0274382  |     0.00940278 |
| find_hyperparametrs_fc6102dc | ERROR    | 130.83.94.32:1340725 | 0.00907818 |     0.0171148  |
| find_hyperparametrs_fd1c8da4 | ERROR    | 130.83.94.32:1341062 | 0.0388599  |     0.0210795  |
+------------------------------+----------+----------------------+------------+----------------+
Number of errored trials: 3
+------------------------------+--------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Trial name                   |   # failures | error file                                                                                                                                                          |
|------------------------------+--------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| find_hyperparametrs_f8047db8 |            1 | /visinf/home/shamidi/ray_results/find_hyperparametrs_2023-05-21_17-55-54/find_hyperparametrs_f8047db8_1_lr=0.0274,weight_decay=0.0094_2023-05-21_17-55-55/error.txt |
| find_hyperparametrs_fc6102dc |            1 | /visinf/home/shamidi/ray_results/find_hyperparametrs_2023-05-21_17-55-54/find_hyperparametrs_fc6102dc_2_lr=0.0091,weight_decay=0.0171_2023-05-21_17-56-02/error.txt |
| find_hyperparametrs_fd1c8da4 |            1 | /visinf/home/shamidi/ray_results/find_hyperparametrs_2023-05-21_17-55-54/find_hyperparametrs_fd1c8da4_3_lr=0.0389,weight_decay=0.0211_2023-05-21_17-56-09/error.txt |
+------------------------------+--------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+

== Status ==
Current time: 2023-05-21 17:56:20 (running for 00:00:25.71)
Memory usage on this node: 11.7/125.8 GiB
Using AsyncHyperBand: num_stopped=0
Bracket: Iter 80.000: None | Iter 40.000: None | Iter 20.000: None | Iter 10.000: None | Iter 5.000: None
Resources requested: 24.0/24 CPUs, 0.5/1 GPUs, 0.0/75.52 GiB heap, 0.0/36.36 GiB objects (0.0/1.0 accelerator_type:TITAN)
Result logdir: /visinf/home/shamidi/ray_results/find_hyperparametrs_2023-05-21_17-55-54
Number of trials: 4/4 (3 ERROR, 1 RUNNING)
+------------------------------+----------+----------------------+------------+----------------+
| Trial name                   | status   | loc                  |         lr |   weight_decay |
|------------------------------+----------+----------------------+------------+----------------|
| find_hyperparametrs_00b9c760 | RUNNING  | 130.83.94.32:1341333 | 0.0176559  |     0.0310831  |
| find_hyperparametrs_f8047db8 | ERROR    | 130.83.94.32:1340521 | 0.0274382  |     0.00940278 |
| find_hyperparametrs_fc6102dc | ERROR    | 130.83.94.32:1340725 | 0.00907818 |     0.0171148  |
| find_hyperparametrs_fd1c8da4 | ERROR    | 130.83.94.32:1341062 | 0.0388599  |     0.0210795  |
+------------------------------+----------+----------------------+------------+----------------+
Number of errored trials: 3
+------------------------------+--------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Trial name                   |   # failures | error file                                                                                                                                                          |
|------------------------------+--------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| find_hyperparametrs_f8047db8 |            1 | /visinf/home/shamidi/ray_results/find_hyperparametrs_2023-05-21_17-55-54/find_hyperparametrs_f8047db8_1_lr=0.0274,weight_decay=0.0094_2023-05-21_17-55-55/error.txt |
| find_hyperparametrs_fc6102dc |            1 | /visinf/home/shamidi/ray_results/find_hyperparametrs_2023-05-21_17-55-54/find_hyperparametrs_fc6102dc_2_lr=0.0091,weight_decay=0.0171_2023-05-21_17-56-02/error.txt |
| find_hyperparametrs_fd1c8da4 |            1 | /visinf/home/shamidi/ray_results/find_hyperparametrs_2023-05-21_17-55-54/find_hyperparametrs_fd1c8da4_3_lr=0.0389,weight_decay=0.0211_2023-05-21_17-56-09/error.txt |
+------------------------------+--------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+

 

I think there is a bug somewhere that manifests itself as these dataloader errors, but I don't know where it is. The dataloader and the path to the dataset look okay to me. I appreciate any guidance.

I think the error is:

FileNotFoundError: [Errno 2] No such file or directory: 'dataset/cifar10/train/automobile/4746.png'
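
A quick way to see what that relative path resolves against inside the trial process (just a sketch you could drop into the trainable or the dataset's __getitem__):

import os

# Where is the trial process actually running from, and does the
# relative path resolve from there?
print("cwd:", os.getcwd())
print("exists:", os.path.exists("dataset/cifar10/train/automobile/4746.png"))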

Yes, but the file and directory do exist, and I have no problem with my data_loader when ray.tune() is not used.

This is my loader:

"""
rainbow-memory
Copyright 2021-present NAVER Corp.
GPLv3
"""
import logging.config
import os
from typing import List

import PIL
import numpy as np
import pandas as pd
import torch
torch.use_deterministic_algorithms(True, warn_only=True)
from torch.utils.data import Dataset
import pdb
logger = logging.getLogger()


class ImageDataset(Dataset):
    def __init__(self, data_frame: pd.DataFrame, dataset: str, transform=None):
        self.data_frame = data_frame
        self.dataset = dataset
        self.transform = transform

    def __len__(self):
        return len(self.data_frame)

    def __getitem__(self, idx):
        sample = dict()
        if torch.is_tensor(idx):
            idx = idx.tolist()

        img_name = self.data_frame.iloc[idx]["file_name"]
        label = self.data_frame.iloc[idx].get("label", -1)

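        # NOTE: "dataset/..." is a relative path, resolved against the current
        # working directory of whichever process runs __getitem__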
        img_path = os.path.join("dataset", self.dataset, img_name)
        image = PIL.Image.open(img_path).convert("RGB")
        if self.transform:
            image = self.transform(image)
        sample["image"] = image
        sample["label"] = label
        sample["image_name"] = img_name
        return sample

    def get_image_class(self, y):
        return self.data_frame[self.data_frame["label"] == y]



# ToDo: the address here needs to change: 4500 size 
def get_train_datalist(args, cur_iter: int) -> List:
    if args.mode == "joint":
        datalist = []
        for cur_iter_ in range(args.n_tasks):
            collection_name = get_train_collection_name(
                dataset=args.dataset,
                exp=args.exp_name,
                rnd=args.rnd_seed,
                n_cls=args.n_cls_a_task,
                iter=cur_iter_,
            )
            #f"collections/{args.dataset}/{collection_name}.json"
            datalist += pd.read_json( 
                f"collections/{args.dataset}/{collection_name}.json"    
            ).to_dict(orient="records")
            logger.info(f"[Train] Get datalist from {collection_name}.json")
    else:
        collection_name = get_train_collection_name(
            dataset=args.dataset,
            exp=args.exp_name,
            rnd=args.rnd_seed,
            n_cls=args.n_cls_a_task,
            iter=cur_iter,
        )
        # f"collections/{args.dataset}/{collection_name}.json"
        datalist = pd.read_json(
            f"collections/{args.dataset}/{collection_name}.json"
        ).to_dict(orient="records")
        logger.info(f"[Train] Get datalist from {collection_name}.json")

    return datalist


def get_train_collection_name(dataset, exp, rnd, n_cls, iter):
    collection_name = "{dataset}_train_{exp}_rand{rnd}_cls{n_cls}_task{iter}".format(
        dataset=dataset, exp=exp, rnd=rnd, n_cls=n_cls, iter=iter
    )
    return collection_name


def get_test_datalist(args, exp_name: str, cur_iter: int) -> List:
    if exp_name is None:
        exp_name = args.exp_name

    if exp_name in ["joint", "blurry10", "blurry30"]:
        # merge over all tasks
        tasks = list(range(args.n_tasks))
    elif exp_name == "disjoint":
        # merge current and all previous tasks
        tasks = list(range(cur_iter + 1))
    else:
        raise NotImplementedError

    datalist = []
    for iter_ in tasks:
        collection_name = "{dataset}_test_rand{rnd}_cls{n_cls}_task{iter}".format(
            dataset=args.dataset, rnd=args.rnd_seed, n_cls=args.n_cls_a_task, iter=iter_
        )
        datalist += pd.read_json(
            f"collections/{args.dataset}/{collection_name}.json"
        ).to_dict(orient="records")
        logger.info(f"[Test ] Get datalist from {collection_name}.json")

    return datalist

# ToDo: add valid datalist
def get_valid_datalist(args, exp_name: str, cur_iter: int) -> List:
    if exp_name is None:
        exp_name = args.exp_name

    if exp_name in ["joint", "blurry10", "blurry30"]:
        # merge over all tasks
        tasks = list(range(args.n_tasks))
    elif exp_name == "disjoint":
        # merge current and all previous tasks
        tasks = list(range(cur_iter + 1))
    else:
        raise NotImplementedError

    datalist = []
    for iter_ in tasks:
        collection_name = "{dataset}_valid_rand{rnd}_cls{n_cls}_task{iter}".format(
            dataset=args.dataset, rnd=args.rnd_seed, n_cls=args.n_cls_a_task, iter=iter_
        )
        datalist += pd.read_json(
            f"collections/{args.dataset}/{collection_name}.json"
        ).to_dict(orient="records")
        logger.info(f"[Test ] Get datalist from {collection_name}.json")

    return datalist



def get_statistics(dataset: str):
    """
    Returns statistics of the dataset given a string of dataset name. To add new dataset, please add required statistics here
    """
    assert dataset in [
        "mnist",
        "KMNIST",
        "EMNIST",
        "FashionMNIST",
        "SVHN",
        "cifar10",
        "cifar100",
        "CINIC10",
        "imagenet100",
        "imagenet1000",
        "TinyImagenet",
        "cub200",
    ]
    mean = {
        "mnist": (0.1307,),
        "KMNIST": (0.1307,),
        "EMNIST": (0.1307,),
        "FashionMNIST": (0.1307,),
        "SVHN": (0.4377, 0.4438, 0.4728),
        "cifar10": (0.4914, 0.4822, 0.4465),
        "cifar100": (0.5071, 0.4867, 0.4408),
        "CINIC10": (0.47889522, 0.47227842, 0.43047404),
        "TinyImagenet": (0.4802, 0.4481, 0.3975),
        "imagenet100": (0.485, 0.456, 0.406),
        "imagenet1000": (0.485, 0.456, 0.406),
        "cub200": (0.485, 0.456, 0.406),
    }

    std = {
        "mnist": (0.3081,),
        "KMNIST": (0.3081,),
        "EMNIST": (0.3081,),
        "FashionMNIST": (0.3081,),
        "SVHN": (0.1969, 0.1999, 0.1958),
        "cifar10": (0.2023, 0.1994, 0.2010),
        "cifar100": (0.2675, 0.2565, 0.2761),
        "CINIC10": (0.24205776, 0.23828046, 0.25874835),
        "TinyImagenet": (0.2302, 0.2265, 0.2262),
        "imagenet100": (0.229, 0.224, 0.225),
        "imagenet1000": (0.229, 0.224, 0.225),
        "cub200": (0.229, 0.224, 0.225),
    }

    classes = {
        "mnist": 10,
        "KMNIST": 10,
        "EMNIST": 49,
        "FashionMNIST": 10,
        "SVHN": 10,
        "cifar10": 10,
        "cifar100": 100,
        "CINIC10": 10,
        "TinyImagenet": 200,
        "imagenet100": 100,
        "imagenet1000": 1000,
        "cub200": 180,
    }

    in_channels = {
        "mnist": 1,
        "KMNIST": 1,
        "EMNIST": 1,
        "FashionMNIST": 1,
        "SVHN": 3,
        "cifar10": 3,
        "cifar100": 3,
        "CINIC10": 3,
        "TinyImagenet": 3,
        "imagenet100": 3,
        "imagenet1000": 3,
        "cub200": 3,
    }

    inp_size = {
        "mnist": 28,
        "KMNIST": 28,
        "EMNIST": 28,
        "FashionMNIST": 28,
        "SVHN": 32,
        "cifar10": 32,
        "cifar100": 32,
        "CINIC10": 32,
        "TinyImagenet": 64,
        "imagenet100": 224,
        "imagenet1000": 224,
        "cub200": 224,
    }
    return (
        mean[dataset],
        std[dataset],
        classes[dataset],
        inp_size[dataset],
        in_channels[dataset],
    )


# from https://github.com/drimpossible/GDumb/blob/74a5e814afd89b19476cd0ea4287d09a7df3c7a8/src/utils.py#L102:5
def cutmix_data(x, y, alpha=1.0, cutmix_prob=0.5):
    assert alpha > 0
    # generate mixed sample
    lam = np.random.beta(alpha, alpha)

    batch_size = x.size()[0]
    index = torch.randperm(batch_size)

    if torch.cuda.is_available():
        index = index.cuda()

    y_a, y_b = y, y[index]
    bbx1, bby1, bbx2, bby2 = rand_bbox(x.size(), lam)
    x[:, :, bbx1:bbx2, bby1:bby2] = x[index, :, bbx1:bbx2, bby1:bby2]

    # adjust lambda to exactly match pixel ratio
    lam = 1 - ((bbx2 - bbx1) * (bby2 - bby1) / (x.size()[-1] * x.size()[-2]))
    return x, y_a, y_b, lam


def rand_bbox(size, lam):
    W = size[2]
    H = size[3]
    cut_rat = np.sqrt(1.0 - lam)
    cut_w = int(W * cut_rat)
    cut_h = int(H * cut_rat)

    # uniform
    cx = np.random.randint(W)
    cy = np.random.randint(H)

    bbx1 = np.clip(cx - cut_w // 2, 0, W)
    bby1 = np.clip(cy - cut_h // 2, 0, H)
    bbx2 = np.clip(cx + cut_w // 2, 0, W)
    bby2 = np.clip(cy + cut_h // 2, 0, H)

    return bbx1, bby1, bbx2, bby2

I also tried putting data_loader.py in the same subfolder as the trainable, but it didn't help.

How many nodes do you have in your cluster?
Remember that Tune may ship your trainable to a different node of the cluster.
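
Also, even on a single node, Tune may change each trial's working directory, so a relative path like "dataset/cifar10/..." can stop resolving inside a trial even though it works in your normal runs. One way to make the loader robust is to anchor the dataset location on an absolute path resolved once in the driver, before tune.run() (a sketch; DATA_ROOT and resolve_img_path are just illustrative names, not part of your code):

import os

# Resolve the dataset root once, in the driver process, so every trial
# sees the same absolute location regardless of its own working directory.
PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))
DATA_ROOT = os.path.join(PROJECT_ROOT, "dataset")


def resolve_img_path(dataset: str, img_name: str) -> str:
    # What ImageDataset.__getitem__ could call instead of joining "dataset" relatively.
    return os.path.join(DATA_ROOT, dataset, img_name)


print(resolve_img_path("cifar10", "train/automobile/4746.png"))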