BERT population-based hyperparameter tuning with Hugging Face and Ray Tune

I have implemented code to perform hyperparameter tuning for BERT with Hugging Face and Ray Tune. The run was successful, and I could see the status of all the trials. I was trying to use the best trial suggested after the hyperparameter tuning, using pbt_policy_train____.txt, but I couldn’t figure out how to achieve this with Hugging Face and Ray Tune.

Any suggestions would be really appreciated.

Hi @Prasanth_vaidya, PBT gives you a hyperparameter schedule rather than a single configuration. If you want to replay the same training again on the same trainable, you can use our population-based training replay utility.
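
For reference, a minimal sketch of how the replay utility is wired up, with a toy trainable and a made-up policy file name standing in for the real ones. As with regular PBT, the replayed config changes are applied by saving and restoring checkpoints, so the trainable needs to checkpoint its state:

```python
import json
import os

from ray import tune
from ray.tune.schedulers import PopulationBasedTrainingReplay

# Toy trainable standing in for the real training loop. It checkpoints its
# state so the scheduler can restore it when it swaps in a new config.
def my_trainable(config, checkpoint_dir=None):
    step, score = 0, 0.0
    if checkpoint_dir:
        with open(os.path.join(checkpoint_dir, "state.json")) as f:
            state = json.load(f)
            step, score = state["step"], state["score"]
    while True:
        step += 1
        score += config["lr"]  # "lr" is a placeholder key; the keys must match
                               # the search space used in the original PBT run
        with tune.checkpoint_dir(step=step) as cd:
            with open(os.path.join(cd, "state.json"), "w") as f:
                json.dump({"step": step, "score": score}, f)
        tune.report(score=score)

# pbt_policy_*.txt is written into the experiment directory by the PBT run;
# the file name here is a placeholder.
replay = PopulationBasedTrainingReplay("./pbt_policy_xxxxx_00000.txt")

tune.run(my_trainable, scheduler=replay, stop={"training_iteration": 20})
```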

The main use case for PBT is to get a well-trained model at the end and use it in downstream tasks. If you share what you’d like to achieve, that would be helpful!
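
For context, a rough sketch of that flow with the Hugging Face integration: run the search through `trainer.hyperparameter_search` with a PBT scheduler and keep checkpoints so the trained weights survive the run. The model name, label count, toy dataset, metric, and search space below are assumptions for illustration, not details from this thread:

```python
from datasets import Dataset
from ray import tune
from ray.tune.schedulers import PopulationBasedTraining
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "bert-base-uncased"   # placeholder model
NUM_LABELS = 3                     # placeholder label count

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

# Tiny in-memory dataset standing in for the real multi-class data.
data = Dataset.from_dict({"text": ["good", "bad", "meh"] * 16,
                          "label": [0, 1, 2] * 16})
data = data.map(lambda x: tokenizer(x["text"], truncation=True,
                                    padding="max_length", max_length=32))

def model_init():
    # hyperparameter_search re-instantiates the model for every trial.
    return AutoModelForSequenceClassification.from_pretrained(
        MODEL_NAME, num_labels=NUM_LABELS)

training_args = TrainingArguments(
    output_dir="hp_tuning_baseline",
    evaluation_strategy="epoch",   # PBT needs periodic evaluation reports
    num_train_epochs=2,
    disable_tqdm=True,
)

trainer = Trainer(
    model_init=model_init,
    args=training_args,
    train_dataset=data,
    eval_dataset=data,
)

pbt = PopulationBasedTraining(
    time_attr="training_iteration",
    metric="eval_loss",
    mode="min",
    perturbation_interval=1,
    hyperparam_mutations={
        "learning_rate": tune.uniform(1e-5, 5e-5),
        "per_device_train_batch_size": [8, 16, 32],
    },
)

best_run = trainer.hyperparameter_search(
    backend="ray",
    n_trials=4,
    scheduler=pbt,
    hp_space=lambda _: {
        "learning_rate": tune.uniform(1e-5, 5e-5),
        "per_device_train_batch_size": tune.choice([8, 16, 32]),
    },
    keep_checkpoints_num=1,            # keep the trials' checkpoints around
    resources_per_trial={"cpu": 1, "gpu": 0},
)

# The trial folders under Ray's results directory (default ~/ray_results)
# hold the kept checkpoints; the best one can then be reloaded with
# AutoModelForSequenceClassification.from_pretrained(<checkpoint folder>).
print(best_run.run_id, best_run.objective, best_run.hyperparameters)
```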

Hi @kai, thanks for your response. I used PBT for hyperparameter tuning of BERT on a multi-class text classification problem with Hugging Face. I got the best model for a trial and used the population-based training replay utility, but I couldn’t replicate the experiment with the Hugging Face Trainer:

```python
trainer = Trainer(...)  # Hugging Face Trainer

replay = PopulationBasedTrainingReplay(
    "hp_tuning_baseline/wand/Anonymized data hptuning/pbt_policy_1f7bf_00010.txt")

tune.run(trainer, scheduler=replay)
```
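
One thing to note here: `tune.run` expects a Tune trainable (a function or a `Trainable` subclass), not a `Trainer` instance. Below is a sketch of how the Hugging Face training loop could be wrapped so the replay scheduler has something to drive. The model name, toy dataset, and hyperparameter keys are assumptions, the wrapper only checkpoints model weights, and for a faithful replay the trainable should mirror the one used in the original PBT run:

```python
import os

import torch
from datasets import Dataset
from ray import tune
from ray.tune.schedulers import PopulationBasedTrainingReplay
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "bert-base-uncased"   # placeholder; not from this thread
NUM_LABELS = 3                     # placeholder label count

def train_bert(config, checkpoint_dir=None):
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    # Tiny in-memory dataset standing in for the real multi-class data.
    data = Dataset.from_dict({"text": ["good", "bad", "meh"] * 16,
                              "label": [0, 1, 2] * 16})
    data = data.map(lambda x: tokenizer(x["text"], truncation=True,
                                        padding="max_length", max_length=32))

    model = AutoModelForSequenceClassification.from_pretrained(
        MODEL_NAME, num_labels=NUM_LABELS)
    if checkpoint_dir:
        # Restore the model weights saved before a replayed config change
        # (simplified: optimizer state is not restored here).
        state = torch.load(os.path.join(checkpoint_dir, "model.pt"),
                           map_location="cpu")
        model.load_state_dict(state)

    step = 0
    while True:
        step += 1
        # The keys below must match the search space recorded in the policy
        # file (assumed here: learning_rate and per_device_train_batch_size).
        args = TrainingArguments(
            output_dir="hf_out",
            learning_rate=config["learning_rate"],
            per_device_train_batch_size=int(config["per_device_train_batch_size"]),
            num_train_epochs=1,        # one epoch per Tune training iteration
            disable_tqdm=True,
        )
        trainer = Trainer(model=model, args=args,
                          train_dataset=data, eval_dataset=data)
        trainer.train()
        metrics = trainer.evaluate()

        # Checkpoint so the scheduler can restore state when it applies the
        # next recorded config change.
        with tune.checkpoint_dir(step=step) as cd:
            torch.save(model.state_dict(), os.path.join(cd, "model.pt"))
        tune.report(**metrics)

# Policy file recorded by the original PBT run (path copied from the post above).
replay = PopulationBasedTrainingReplay(
    "hp_tuning_baseline/wand/Anonymized data hptuning/pbt_policy_1f7bf_00010.txt")

tune.run(
    train_bert,
    scheduler=replay,
    resources_per_trial={"cpu": 1, "gpu": 0},
    stop={"training_iteration": 3},
)
```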