[Tune] Porting custom algorithm to Ray Tune

Hi, I am looking into porting a custom population-based algorithm to use as a Searcher within Ray Tune. Do you have a template or guidance I can follow for inspiration? Thanks!

Hey @max_ronda, yes, there are two ways to go about this.

You could subclass the existing PopulationBasedTraining scheduler and override the necessary methods for your algorithm. This is similar to what we did when implementing Population Based Bandits (ray/pb2.py at master · ray-project/ray · GitHub), and it is a good fit when there is a lot of overlap between your algorithm and the existing Population Based Training one.

Alternatively, you can implement the base TrialScheduler interface directly: ray/trial_scheduler.py at master · ray-project/ray · GitHub.
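For the second option, a custom scheduler boils down to overriding a handful of callbacks. The skeleton below is a sketch, not a working algorithm; the class name is made up, and the exact method signatures should be checked against the `trial_scheduler.py` source for the Ray release you are on (newer versions renamed the `trial_runner` argument). The fallback stub only exists so the sketch runs even without Ray installed.

```python
try:
    from ray.tune.schedulers import TrialScheduler
except ImportError:
    # Minimal stand-in so this sketch runs without Ray installed.
    class TrialScheduler:
        CONTINUE, PAUSE, STOP = "CONTINUE", "PAUSE", "STOP"


class MyPopulationScheduler(TrialScheduler):
    """Hypothetical scheduler skeleton that lets every trial run to completion."""

    def on_trial_add(self, trial_runner, trial):
        pass  # called when a new trial enters the experiment

    def on_trial_result(self, trial_runner, trial, result):
        # Called on each intermediate result; return CONTINUE, PAUSE, or STOP.
        return self.CONTINUE

    def on_trial_complete(self, trial_runner, trial, result):
        pass  # a natural place to record results for the next generation

    def on_trial_error(self, trial_runner, trial):
        pass

    def on_trial_remove(self, trial_runner, trial):
        pass

    def choose_trial_to_run(self, trial_runner):
        return None  # defer to Tune's default trial selection

    def debug_string(self):
        return "MyPopulationScheduler"
```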

cc @justinvyu

Hi @amogkam , I guess I wasn't thinking of this as a scheduler problem but more as an optimization problem. I am keen to adapt a framework that contains multiple algorithms, similar to Optuna, Dragonfly, or Nevergrad, and add it as a Searcher within Ray Tune. The optimizers I want to tap into are population-based, multi-objective genetic algorithms. So I'd like to set up the initial trials for the starting population, let them run through Ray, collect the results, let the algorithm select the next trials, and so on until the termination criteria are met. So I am curious: is there a class I can subclass to quickly implement this?


Let me know if I am understanding your question correctly. It would help if you could provide more specifics on how your algorithm works.

From what you described it sounds like you want to add a new algorithm to Ray Tune.

This can be done through either the Searcher interface or the TrialScheduler interface.

  1. If your algorithm just needs to suggest new hyperparameters, you can implement a new Ray Tune Searcher by implementing the Searcher interface: ray/searcher.py at master · ray-project/ray · GitHub.
  2. If your algorithm doesn’t just suggest new hyperparameter configurations but also modifies the actual execution of trials (for example, early stopping, or population based training mutating the state of an existing trial), then you can implement the TrialScheduler interface linked above.
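For option 1, the core of the Searcher interface is just `suggest()` and `on_trial_complete()`. Here is a minimal sketch: the class name, the `x` hyperparameter, and the mutate-the-best strategy are all invented for illustration, and the import path moved between Ray releases (`ray.tune.suggest.Searcher` in older versions, `ray.tune.search.Searcher` in newer ones), so check the `searcher.py` source for your release. The stand-in base class only exists so the sketch runs without Ray installed.

```python
import random

try:
    from ray.tune.search import Searcher  # older releases: ray.tune.suggest.Searcher
except ImportError:
    class Searcher:  # minimal stand-in so the sketch runs without Ray installed
        def __init__(self, metric=None, mode=None):
            self._metric, self._mode = metric, mode

        @property
        def metric(self):
            return self._metric


class RandomMutationSearcher(Searcher):
    """Hypothetical Searcher: random first configs, then mutate the best one."""

    def __init__(self, metric="score", mode="max"):
        super().__init__(metric=metric, mode=mode)
        self._live = {}      # trial_id -> config, for trials still running
        self._finished = []  # (config, score) pairs

    def suggest(self, trial_id):
        # Called by Tune each time it wants a config for a new trial.
        if self._finished:
            best, _ = max(self._finished, key=lambda pair: pair[1])
            config = {"x": best["x"] + random.uniform(-0.5, 0.5)}
        else:
            config = {"x": random.uniform(-10.0, 10.0)}
        self._live[trial_id] = config
        return config

    def on_trial_complete(self, trial_id, result=None, error=False):
        # Called by Tune when a trial finishes; feed the score back in.
        if result is not None:
            self._finished.append((self._live.pop(trial_id), result[self.metric]))
```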

Once you do this, you can plug this into Tune and then run your experiments.

Depending on how exactly your algorithm works, you may need to implement both a new Searcher and a new TrialScheduler.

Hi @amogkam , thanks for the response!

What I am trying to achieve sounds more like #1: a population-based Searcher. I want to take an existing genetic algorithm that solves constrained optimization problems and fit it into Ray Tune so that I can use it to optimize any implicit or explicit objective function. I am hoping to have Ray distribute each member of the population while leaving the GA to decide on the next members of the population. At the moment I am not looking for early stopping or very fine-grained control; I just want Ray Tune to distribute trials and manage checkpointing/logging. I am aware I could do this with Ray Core, but I am keen to build on Ray Tune so I can extend this to other algorithms.

Is Searcher the best class for this? Does Searcher allow me to submit multiple trials as part of a population?

Thanks again!

In that case, yes, a Searcher sounds like the best approach.

When running Tune, you can supply the initial hyperparameters you want to evaluate and the number of trials to run. Once you implement your own Searcher, you can use it just like any of the existing search algorithms.
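On the question of submitting a whole population at once: `suggest()` is called once per trial, but a Searcher can hold a generation together by returning `None` when it has nothing to suggest yet, which tells Tune to ask again later rather than start a new trial. The sketch below uses that to run GA-style generations; the class name, the `x` hyperparameter, and the toy selection/mutation step are all invented for illustration, and the stand-in base class only exists so the sketch runs without Ray installed.

```python
import random

try:
    from ray.tune.search import Searcher  # older releases: ray.tune.suggest.Searcher
except ImportError:
    class Searcher:  # minimal stand-in so the sketch runs without Ray installed
        def __init__(self, metric=None, mode=None):
            self._metric, self._mode = metric, mode

        @property
        def metric(self):
            return self._metric


class GenerationalSearcher(Searcher):
    """Hypothetical GA-style Searcher that suggests one generation at a time."""

    def __init__(self, population_size=4, metric="fitness", mode="max"):
        super().__init__(metric=metric, mode=mode)
        self._size = population_size
        # Generation 0: random configs.
        self._pending = [{"x": random.uniform(-10, 10)}
                         for _ in range(population_size)]
        self._live = {}    # trial_id -> config, for trials still running
        self._scored = []  # (config, score) pairs for the current generation

    def suggest(self, trial_id):
        if not self._pending:
            # The whole generation is still running: returning None tells
            # Tune to retry later instead of starting a new trial now.
            return None
        config = self._pending.pop()
        self._live[trial_id] = config
        return config

    def on_trial_complete(self, trial_id, result=None, error=False):
        if result is not None:
            self._scored.append((self._live.pop(trial_id), result[self.metric]))
        if not self._live and not self._pending:
            self._breed_next_generation()

    def _breed_next_generation(self):
        # Toy selection + mutation: keep the top half, perturb each parent twice.
        self._scored.sort(key=lambda pair: pair[1], reverse=True)
        parents = [cfg for cfg, _ in self._scored[: self._size // 2]]
        self._pending = [{"x": p["x"] + random.uniform(-1, 1)}
                         for p in parents for _ in range(2)]
        self._scored = []
```

You would then pass an instance as the search algorithm when launching the experiment, alongside the total trial budget, in the usual way for custom searchers (e.g. via the `search_alg` argument together with `num_samples`).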