One Trainer, multiple Datasets

I want to run an experiment with multiple datasets, searching for the best combination of hyperparameters for my model on each dataset.

Which would be better: having multiple trainers, one per dataset, or having one trainer with a grid_search parameter over the datasets (sketched below)?

The second option is not compatible with some search algorithms like HyperOpt, though.
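
For reference, option two would look roughly like this. This is a minimal sketch assuming Ray Tune's Tuner API; train_fn, the val_loss metric, and the dataset names are placeholders, not my actual code:

from ray import tune

def train_fn(config):
    # Hypothetical trainable: train on config["dataset"] with
    # config["lr"], then return the metric to optimize.
    return {"val_loss": 0.0}  # placeholder value

param_space = {
    # The dataset itself becomes a grid-searched "hyperparameter".
    "dataset": tune.grid_search(["dataset_a", "dataset_b"]),
    "lr": tune.loguniform(1e-4, 1e-1),
}

tuner = tune.Tuner(train_fn, param_space=param_space)
results = tuner.fit()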

It seems like you care about the best hyperparameters per dataset. Launching multiple runs with one dataset per trainer (and tuning over whatever hyperparameters) is what I’d recommend.
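
As a rough sketch of that pattern (again assuming Ray Tune, with HyperOptSearch as the search algorithm; the names and numbers are placeholders):

from ray import tune
from ray.tune.search.hyperopt import HyperOptSearch

def train_fn(config, dataset):
    # Hypothetical trainable: train on the fixed dataset with
    # config["lr"], then return the metric to optimize.
    return {"val_loss": 0.0}  # placeholder value

for dataset in ["dataset_a", "dataset_b"]:
    tuner = tune.Tuner(
        # Bind the dataset to the trainable; only lr is searched.
        tune.with_parameters(train_fn, dataset=dataset),
        param_space={"lr": tune.loguniform(1e-4, 1e-1)},
        tune_config=tune.TuneConfig(
            search_alg=HyperOptSearch(),
            metric="val_loss",
            mode="min",
            num_samples=20,
        ),
    )
    results = tuner.fit()

Since each per-dataset run has its own search space with no grid_search in it, HyperOpt works fine here.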

That's what I'm ending up doing.

Btw, if I want to run multiple trainers (not concurrently), I do:

from tqdm import tqdm

for t in tqdm(trainers):
    t.fit()

But the progress bar is not shown because the trainer interface overrides it. Is it possible to show which trainer is currently running?
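
One workaround I'm trying (just a sketch; I haven't confirmed how it interacts with the trainer's own progress output): label the tqdm bar with the current trainer's index:

from tqdm import tqdm

pbar = tqdm(trainers)
for i, t in enumerate(pbar, start=1):
    # Put the current trainer in the bar's description so it stays
    # visible even while each fit() prints its own output.
    pbar.set_description(f"Trainer {i}/{len(trainers)}")
    t.fit()

If the trainer suppresses the bar entirely, a plain print before each fit() would be the fallback.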