This is my first attempt at using Ray Tune, and I am following this walkthrough. The author uses the method
ray.tune.run to perform some kind of unspecified search. I want to see what else I can do with that method, but I can't find any documentation for it aside from firing up Python, importing it, and running help on it.
- A search of the Ray documentation returns a number of results with similar syntax, especially
tune.run_experiment, but that isn’t what I’m looking for.
- All of the examples on the Ray website that I have seen use something other than tune.run.

Does anyone have a link to the documentation on
tune.run handy? It seems crazy that I would have to resort to asking on the message board, but here we are.
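For reference, this is roughly the shape of a tune.run call under the Ray 1.x API (the objective, the trainable name, and the toy search space below are illustrative, not taken from the walkthrough; assumes pip install "ray[tune]"):

```python
def objective(config):
    # Toy objective: minimize (x - 3)^2 over the grid below.
    return (config["x"] - 3) ** 2


def trainable(config):
    # tune.run expects a "trainable" that reports metrics back to Tune;
    # tune.report is the Ray 1.x reporting call.
    from ray import tune
    tune.report(score=objective(config))


def run_search():
    # Grid search over x; tune.run returns an ExperimentAnalysis object,
    # whose best_config property holds the winning hyperparameters.
    from ray import tune
    analysis = tune.run(
        trainable,
        config={"x": tune.grid_search([0, 1, 2, 3, 4])},
        metric="score",
        mode="min",
    )
    return analysis.best_config
```

This is a sketch against the Ray 1.x docs, not a definitive recipe; tune.run accepts many more arguments (num_samples, resources_per_trial, search algorithms, schedulers) covered on the Execution page linked in the replies below.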
@foshea There are a ton of examples of Ray AIR, which is now the preferred toolkit for training, tuning, and serving your ML models.
The Tuner API has undergone some changes in Ray 2.x as part of Ray AIR. It takes in a trainable and performs HPO for each trial. In particular, you'll see a gallery of examples here:
I hope these resources will guide you. Let us know.
@foshea Here is a link to the docs from Ray 1.13: Execution (tune.run, tune.Experiment) — Ray 1.13.0, which should be enough to get you started.
Generally, we now recommend the
Tuner API, which has (mostly) the same functionality and also supports tuning over distributed trainers. See here: Tune Execution (tune.Tuner) — Ray 2.5.0
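To make the migration concrete, here is a sketch of the same kind of toy grid search written against the Ray 2.x Tuner API (the objective and search space are illustrative; assumes pip install "ray[tune]" at a 2.x version where ray.air.session is the reporting entry point):

```python
def objective(config):
    # Same toy objective: minimize (x - 3)^2.
    return (config["x"] - 3) ** 2


def trainable(config):
    # In Ray 2.x, trainables report metrics through the Ray AIR session
    # rather than tune.report.
    from ray.air import session
    session.report({"score": objective(config)})


def run_search():
    # tune.Tuner replaces tune.run: the search space moves to param_space
    # and metric/mode move into TuneConfig. tuner.fit() returns a ResultGrid.
    from ray import tune
    tuner = tune.Tuner(
        trainable,
        param_space={"x": tune.grid_search([0, 1, 2, 3, 4])},
        tune_config=tune.TuneConfig(metric="score", mode="min"),
    )
    results = tuner.fit()
    return results.get_best_result().config
```

The division of labor is the design change to notice: what tune.run took as a flat bag of keyword arguments is split between param_space (what to search) and TuneConfig (how to search).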
Thanks @Jules_Damji and @justinvyu. I have since discovered that Ray is incredibly slow on the cluster that I use. This is almost certainly a problem with the cluster, but it still meant I had to roll my own parallel grid-search hyperparameter scanner.
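For anyone who ends up in the same spot, a hand-rolled parallel grid search can stay quite small. This standard-library sketch is one possible shape, not foshea's actual code, and the objective is a toy stand-in for a real training run:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product


def objective(params):
    # Toy objective to minimize; stands in for a real training run.
    return (params["x"] - 3) ** 2 + (params["y"] + 1) ** 2


def grid(space):
    # Expand a dict of value lists into every combination of parameters.
    keys = list(space)
    for values in product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))


def grid_search(space, fn, max_workers=4):
    # Threads keep the sketch simple; a CPU-bound objective would want
    # ProcessPoolExecutor instead (with the call site under an
    # `if __name__ == "__main__":` guard).
    candidates = list(grid(space))
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        scores = list(pool.map(fn, candidates))
    # Return (best_score, best_params).
    return min(zip(scores, candidates), key=lambda t: t[0])
```

With the toy objective above, grid_search({"x": [0, 1, 2, 3], "y": [-1, 0]}, objective) returns (0, {"x": 3, "y": -1}). What this hand-rolled version gives up relative to Tune is everything beyond grid search: trial checkpointing, early stopping, schedulers, and multi-node execution.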