Pass list of configurations as search space to tune.run

Is there an elegant way of passing a list of configurations to tune.run? E.g. something like

configs = [{"a": "1", "b": "2"}, {"a": "3", "b": "3"}, {"a": "4", "b": "4"}]
tune.run(my_fun, config=configs)

I understand that the search space API offers things like random sampling and grid search. In my case, however, it would be very useful to specify an exact list of configurations and have tune.run evaluate each of them with multiple seeds.

I assume there is currently no solution to this in Ray Tune?

@sven1977 could you take a look at this?

Any updates here? This would still be extremely useful for me.

Should I open a feature request somewhere?

Sorry for the slow reply! Some messages can get lost.

You should be able to do this via points_to_evaluate: Search Algorithms (tune.suggest) — Ray v2.0.0.dev0
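Roughly, an untested sketch of how that could look (reusing my_fun and the config list from your example; the placeholder values in config only exist so Tune knows which keys there are, the concrete values come from points_to_evaluate):

from ray import tune
from ray.tune.suggest.basic_variant import BasicVariantGenerator

configs = [{"a": "1", "b": "2"}, {"a": "3", "b": "3"}, {"a": "4", "b": "4"}]

tune.run(
    my_fun,
    # Placeholder keys; the concrete values come from points_to_evaluate.
    config={key: None for key in configs[0]},
    # The searcher evaluates exactly these configurations first.
    search_alg=BasicVariantGenerator(points_to_evaluate=configs),
    num_samples=len(configs))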

You can also use tune.grid_search to do a seeded evaluation:

tune.run(my_fun, config={
    "values": tune.grid_search([cfg1, cfg2, cfg3]),
    "seed": tune.grid_search([1, 2, 3])
})
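Since both keys use grid_search, this generates the full cross product, i.e. every configuration is run with every seed (3 × 3 = 9 trials in this example).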

Thank you for your reply, it’s great news that this is already possible!

I am not sure how to properly combine your two suggestions to repeat each configuration from a list multiple times. A hacky way of doing it would be:

from ray import tune
from ray.tune.suggest.basic_variant import BasicVariantGenerator

cfg = [
    {"a": 2, "b": 2},
    {"a": 1, "b": 1},
    {"a": 1, "b": 2}
]
num_samples = 3  # how many times to repeat each configuration

tune.run(
    lambda config: config["a"] + config["b"],
    # Placeholder keys; the concrete values come from points_to_evaluate.
    config={key: None for key in cfg[0]},
    # Repeat the list so every configuration appears num_samples times.
    search_alg=BasicVariantGenerator(points_to_evaluate=num_samples * cfg),
    num_samples=num_samples * len(cfg))

Is there a cleaner way that doesn’t require multiplying the list of configurations num_samples times?

Oddly enough, this one works:

tune.run(some_function, config=tune.grid_search([config0, config1, config2]))

and I really wonder why. If someone knows, could they answer the corresponding question here?
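If that top-level grid_search is resolved like any nested grid_search spec (which the behaviour above suggests, but I have not verified), then repeating each configuration for several seeds might be as simple as this untested sketch:

tune.run(
    some_function,
    # Every dict in the list becomes one complete trial config.
    config=tune.grid_search([config0, config1, config2]),
    # num_samples repeats the whole grid, e.g. once per seed.
    num_samples=3)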