Using different search strategies for different hyperparameters

How severely does this issue affect your experience of using Ray?

  • None: Just asking a question out of curiosity

Hi there,

I was wondering whether it’d be possible to use different search strategies for different hyperparameters in Ray Tune. For instance, could certain hyperparameters be tuned using random search while others are sampled more intelligently using Bayesian optimization?

Since I’ve been unable to find such a feature so far, I’ve also been wondering whether Ray Tune provides a straightforward way to integrate one.

Any form of help is greatly appreciated. Thank you in advance.

Hi @Cysto, that’s currently not possible, but it’s an interesting idea. I think the best way to do this would be to support nested Ray Tune runs - which could be possible soon (as we’re revamping the tune execution engine).

At the moment, the best workaround would be to use Bayesian optimization in Ray Tune and sample the random parameters directly in the training function, e.g. using numpy.random.
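A minimal sketch of that workaround might look like the following. The hyperparameter names (`lr`, `dropout`), the dummy objective, and the exact import paths are assumptions for illustration; the Tune API has moved between versions (e.g. `ray.tune.suggest.bayesopt` in older releases vs. `ray.tune.search.bayesopt` in Ray 2.x), so check the docs for your installed version. `BayesOptSearch` also requires the `bayesian-optimization` package.

```python
import numpy as np


def train_fn(config):
    # "lr" is proposed by the Bayesian searcher via the config dict.
    # "dropout" is sampled randomly right here, so it stays invisible
    # to the searcher (hypothetical hyperparameters for illustration).
    rng = np.random.default_rng()
    dropout = rng.uniform(0.1, 0.5)

    # ... build and train a model using config["lr"] and dropout ...
    score = -(config["lr"] - 0.01) ** 2  # dummy objective, peaks at lr=0.01

    # Function trainables in Ray 2.x can return a final metrics dict.
    return {"score": score, "dropout": dropout}


def run_search():
    # Requires: pip install "ray[tune]" bayesian-optimization
    from ray import tune
    from ray.tune.search.bayesopt import BayesOptSearch

    tuner = tune.Tuner(
        train_fn,
        tune_config=tune.TuneConfig(
            metric="score",
            mode="max",
            search_alg=BayesOptSearch(),
            num_samples=8,
        ),
        # Only "lr" is exposed to the searcher; "dropout" never appears here.
        param_space={"lr": tune.uniform(1e-4, 1e-1)},
    )
    return tuner.fit()
```

One caveat with this design: because the Bayesian optimizer only models the objective over `lr`, the locally sampled `dropout` shows up to it as unexplained noise in the metric, which can slow its convergence.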


Hi @kai,

Thank you for your swift response and suggestion; this suggestion would indeed work in some cases. It’s also nice to hear it could be possible soon after the tune execution engine has been revamped. Do you by any chance have an ETA on that or some kind of public progress tracking?

Thank you in advance!

It’s hard to give an ETA for general availability here, but I’m working on the refactor right now. It’s quite a large change, so we’ll likely use an opt-in/opt-out scheme once it’s available. I think November/December this year is realistic, including thorough testing.