How to optimise a noisy objective function?

My objective function is quite noisy and can vary by around 4% in final score across multiple runs with identical hyperparameters. Do all hyperparameter search algorithms implicitly assume a stable, deterministic mapping between hyperparameters and score? If not, which ones are suitable for noisy objectives?

I’m thinking I will wrap my objective function in another function that runs 10 trials and reports the lowest or the mean score, and then try to optimise that using Bayesian search. Is there a common strategy or solution for this sort of scenario?
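The wrapping idea above is easy to sketch in plain Python. Everything here is made up for illustration (the objective, the 4% noise level, the learning-rate parameter); the point is just that averaging n repeats shrinks the noise standard deviation by roughly sqrt(n):

```python
import random
import statistics

def noisy_objective(lr):
    # Hypothetical stand-in for a real training run: a "true" score for a
    # given learning rate, plus Gaussian noise with std of about 0.04 (~4%).
    true_score = 1.0 - (lr - 0.1) ** 2
    return true_score + random.gauss(0, 0.04)

def averaged_objective(lr, n_repeats=10):
    # The wrapper: evaluate n_repeats times and report the mean, which
    # reduces the noise std by roughly sqrt(n_repeats).
    return statistics.mean(noisy_objective(lr) for _ in range(n_repeats))

random.seed(0)
single = [noisy_objective(0.1) for _ in range(200)]
averaged = [averaged_objective(0.1) for _ in range(200)]
# The averaged wrapper's spread should be noticeably tighter.
print(statistics.stdev(single), statistics.stdev(averaged))
```

The trade-off, of course, is a 10x evaluation cost per configuration, which is why letting the search algorithm model the noise directly can be more sample-efficient.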

You can try using the Repeater wrapper, which will automatically manage the multiple runs and the averaging step for you.
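Conceptually, a Repeater-style wrapper sits between the searcher and the objective: it re-runs each suggested configuration several times and reports only the aggregate score back to the searcher. A toy pure-Python sketch of that idea (this is not Ray's actual Repeater implementation; the searcher, objective, and numbers are all invented for illustration):

```python
import random
import statistics

class MiniRepeater:
    """Toy sketch of a Repeater-style wrapper: ask the underlying searcher
    for a config, evaluate it `repeat` times, and hand only the averaged
    score back to the searcher."""

    def __init__(self, searcher, repeat=10):
        self.searcher = searcher
        self.repeat = repeat

    def step(self, objective):
        config = self.searcher.suggest()
        scores = [objective(config) for _ in range(self.repeat)]
        mean_score = statistics.mean(scores)
        self.searcher.report(config, mean_score)
        return config, mean_score

class RandomSearcher:
    # Stand-in searcher: suggests a random learning rate and tracks the best
    # (config, score) pair it has been told about.
    def __init__(self):
        self.best = (None, float("-inf"))

    def suggest(self):
        return {"lr": random.uniform(0.01, 0.3)}

    def report(self, config, score):
        if score > self.best[1]:
            self.best = (config, score)

def noisy_objective(config):
    # Hypothetical noisy score, best around lr = 0.1.
    true_score = 1.0 - (config["lr"] - 0.1) ** 2
    return true_score + random.gauss(0, 0.04)

random.seed(1)
rep = MiniRepeater(RandomSearcher(), repeat=10)
for _ in range(30):
    rep.step(noisy_objective)
best_config, best_score = rep.searcher.best
print(best_config, round(best_score, 3))
```

The underlying searcher never sees the raw noisy scores, only the averaged ones, so it can be any search algorithm that otherwise assumes a deterministic objective.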

Alternatively, I’ve heard AxSearch can be made to work well in this kind of setting.

Thanks, I ended up using Ax directly. In case anybody else reads this: it looks like the Ax and BOHB optimisers (and probably most others, apart from a naive Bayesian optimiser that assumes noiseless observations?) already assume the objective function is noisy. So it's probably more efficient to skip the Repeater/averaging step, give the algorithm all of the raw data, and let it do its thing.