I’d also like to optimise the data-related hyperparameters of my pipeline (batch size, sequence length).
Is it possible to do this with the LightningTrainer and ConfigBuilder, or should I use vanilla PyTorch with Tune? Would vanilla PyTorch also support running parallel trials across multiple GPUs?
Also, which platform should I use to ask questions like this? I have already asked on GitHub and in the Slack channel as well.