I would like to point to the Ray Tune Python implementation of Bayesian search, given at ray/bayesopt.py at master · ray-project/ray · GitHub.

Bayesian optimization doesn't seem to have been fully integrated. According to the documentation at GitHub - fmfn/BayesianOptimization: A Python implementation of global optimization with gaussian processes., the objective function should be supplied to the `BayesianOptimization` class, something like this:

```python
optimizer = BayesianOptimization(
    f=black_box_function,
    pbounds=pbounds,
    random_state=1,
)
```

But in the Ray Tune implementation, the optimizer is set up like this:

```python
self.optimizer = byo.BayesianOptimization(
    f=None,
    pbounds=self._space,
    verbose=self._verbose,
    random_state=self._random_state)
```

Why is " f=None". How to insert the neural network model objective function instead of “None” in the above implementation in Raytune?

Currently, the same parameter value is used in every iteration of each trial. It should vary, as shown in the BayesianOptimization GitHub documentation: the algorithm should search heavily in the red-spot area displayed in the animation on that page. For that to happen, the parameter value must change across the iterations of each trial.