I am trying to compare the performance of Ray Tune against traditional hyper-parameter tuning with GridSearchCV. Please find the attached image for the results.
I tried tests 1 and 2 with the fewest hyper-parameters for both cases, and the results seemed convincing. When I increased the number of hyper-parameters for tests 3 and 4, the traditional approach did not respond, whereas Ray Tune finished in less than a minute (faster than its previous run, test 2).
Question: Ray Tune responds quickly even as the number of hyper-parameters increases. Is this expected behavior, or is it random?
Interesting; this is likely because tune.choice draws random samples from the search space, whereas GridSearchCV evaluates every possible combination. With tune.choice, the number of trials is fixed by num_samples, so adding hyper-parameters does not multiply the work; with GridSearchCV, the number of fits grows multiplicatively with every parameter you add.
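Here is a minimal sketch of the difference. The model (RandomForestClassifier on the iris dataset), the parameter values, and num_samples=10 are illustrative assumptions, not taken from your tests; the point is only that GridSearchCV's cost scales with the size of the grid while Ray Tune's cost scales with num_samples.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score
from ray import tune

X, y = load_iris(return_X_y=True)

param_values = {
    "n_estimators": [50, 100, 200],
    "max_depth": [3, 5, 10, None],
    "min_samples_split": [2, 5, 10],
}

# GridSearchCV is exhaustive: every combination is evaluated with every CV fold.
# Here that is 3 * 4 * 3 = 36 combinations, each cross-validated 5 times.
grid = GridSearchCV(RandomForestClassifier(), param_values, cv=5)
grid.fit(X, y)
print("GridSearchCV combinations evaluated:", len(grid.cv_results_["params"]))

# Ray Tune with tune.choice: each trial draws one random combination,
# so the total work is set by num_samples, not by the size of the grid.
def trainable(config):
    clf = RandomForestClassifier(**config)
    acc = cross_val_score(clf, X, y, cv=5).mean()
    return {"accuracy": acc}  # returning a dict reports the final result

search_space = {name: tune.choice(values) for name, values in param_values.items()}
analysis = tune.run(trainable, config=search_space, num_samples=10)
print("Ray Tune trials run:", len(analysis.trials))
```

So your tests 3 and 4 are not really comparing like with like: doubling the number of hyper-parameter values makes GridSearchCV do far more fits, while Ray Tune still runs only num_samples trials. If you want an apples-to-apples comparison, use tune.grid_search instead of tune.choice, which makes Ray Tune exhaust the grid as well.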