# How to perform data augmentation with Ray Tune and AxSearch?

How severe does this issue affect your experience of using Ray?

• High: It blocks me from completing my task.

I deal with a lot of optimization problems involving symmetry. One of them is a physics-based simulation that is “unitless” in that if you swap out certain sets of parameters with each other, you end up with the same simulation output. This is a case where I’d like to be able to use data augmentation to report multiple results (one for each of the symmetric trials each with their own parameterizations) after a single evaluation of the objective function.

Does this seem feasible? I’m wondering how low-level I need to get to make this happen. My experience so far has primarily been with `AxSearch`, and it would be great if there’s a way I could accomplish it with the higher-level API, but if not, would like to know how to do it still.

Hey @sgbaird!

So from just a high level understanding, Ray Tune should be flexible enough to support this!

But you’re gonna have to dumb this down for me some more. Do you have a concrete example of what you’re trying to do?

Hi @amogkam, thanks for getting back to me on this! A parameterization might look like:

``````{"mu_1": 20, "mu_2": 300, "mu_3": 50, "std_1": 1, "std_2": 40, "std_3": 100, "frac_1": 0.3, "frac_2": 0.2, "frac_3": 0.5}
``````

for which I know that swapping the indices (e.g. swap “1” and “2” everywhere) is equivalent in terms of the objective output.
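To make that symmetry concrete, here is a toy check (with a hypothetical stand-in objective, since the real one isn't shown) that jointly permuting the indices of all three parameter groups leaves the output unchanged:

```python
from itertools import permutations

def toy_objective(mus, stds, fracs):
    # Hypothetical symmetric stand-in for the real objective: a weighted sum
    # of per-component terms, invariant under jointly permuting the indices.
    return sum(f * (m + s) for m, s, f in zip(mus, stds, fracs))

mus, stds, fracs = [20, 300, 50], [1, 40, 100], [0.3, 0.2, 0.5]
base = toy_objective(mus, stds, fracs)

# Every joint permutation of the indices yields the same objective value.
for perm in permutations(range(3)):
    permuted = toy_objective([mus[i] for i in perm],
                             [stds[i] for i in perm],
                             [fracs[i] for i in perm])
    assert abs(permuted - base) < 1e-12
```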

While the following might not be the right API for the task, hopefully this helps get the desired functionality across:

``````import numpy as np
from itertools import permutations

from ray import tune
from ray.tune.suggest.ax import AxSearch


def evaluate(parameters):
    means = np.array([float(parameters[name]) for name in ["mu_1", "mu_2", "mu_3"]])
    stds = np.array([float(parameters[name]) for name in ["std_1", "std_2", "std_3"]])
    fracs = np.array([float(parameters[name]) for name in ["frac_1", "frac_2", "frac_3"]])

    # objective function: volume fraction
    vol_frac = get_vol_frac(means, stds, fracs)

    # perform data augmentation by swapping indices (which we know maps to the same `vol_frac`)
    perms = list(permutations([0, 1, 2], 3))  # e.g. [(0, 1, 2), (0, 2, 1), (1, 0, 2), ...]
    augmented_means = []
    augmented_stds = []
    augmented_fracs = []
    for perm in perms:
        augmented_means.append(means[list(perm)])
        augmented_stds.append(stds[list(perm)])
        augmented_fracs.append(fracs[list(perm)])

    # Q: How to tell the model that the data augmented parameterizations have the same `vol_frac`?
    tune.report(**{target_name: vol_frac})


# `ax_client`, `target_name`, `max_parallel`, and `n_trials` are defined elsewhere
algo = AxSearch(ax_client=ax_client)
algo = tune.suggest.ConcurrencyLimiter(algo, max_concurrent=max_parallel)
tune.run(evaluate, num_samples=n_trials, search_alg=algo)
``````

Ah got it.
This is really interesting.
So the ask here is to inform Tune, through `tune.report`, that certain other hp combinations will yield the same result as this combination. I don’t know of a way to achieve the exact thing you are asking.
Alternatively I think this is equivalent to defining a search space to be just a portion of the original one. Do you think it’s doable?

In the simplest case of a 2D search space, you have x1 and x2. Let’s say there’s mirror symmetry in both x1 and x2. So instead of searching the whole 2D plane, you can restrict the search space to x1 >= 0 and x2 >= 0. Does that make sense?
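As a quick sanity check of that idea (using a toy mirror-symmetric objective, not anything Tune-specific), restricting the search to one quadrant still contains the optimum, because every point in the full plane has a mirror image there:

```python
import itertools

# Toy mirror-symmetric objective: f(x1, x2) = f(|x1|, |x2|).
def f(x1, x2):
    return (abs(x1) - 1.0) ** 2 + (abs(x2) - 2.0) ** 2

grid = [i * 0.5 for i in range(-10, 11)]           # full-plane grid
full_best = min(f(a, b) for a, b in itertools.product(grid, grid))

quadrant = [g for g in grid if g >= 0]             # restricted search space
quad_best = min(f(a, b) for a, b in itertools.product(quadrant, quadrant))

# The restricted quadrant finds the same best value as the full plane.
assert full_best == quad_best
```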

> Ah got it.
> This is really interesting.

@xwjiang2010 thanks! Glad it got across, too.

> So the ask here is to inform Tune, through `tune.report`, that certain other hp combinations will yield the same result as this combination. I don’t know of a way to achieve the exact thing you are asking.

Correct. It sounds then like I’d need to move to some lower-level functionality to make this happen.

> Alternatively I think this is equivalent to defining a search space to be just a portion of the original one. Do you think it’s doable?

It’s certainly an alternative, and one that I’m actually implementing right now. In my specific case, setting `sigma_1 < sigma_2 < sigma_3` does the trick in restricting it to a single solution, but I’ve found in the past that performing data augmentation actually helps the model quite a bit compared with the single region, since the distinct regions get to learn from each other, so to speak.
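One lightweight way to enforce an ordering like `sigma_1 < sigma_2 < sigma_3` without touching the search algorithm (a sketch, not a Tune feature) is to canonicalize inside the objective: jointly reorder all parameter groups so the sigmas are ascending, so every member of a symmetric orbit evaluates the same canonical point:

```python
def canonicalize(means, stds, fracs):
    # Jointly reorder all three lists so that `stds` is ascending; every
    # parameterization in a symmetric orbit maps to one canonical form.
    order = sorted(range(len(stds)), key=lambda i: stds[i])
    return ([means[i] for i in order],
            [stds[i] for i in order],
            [fracs[i] for i in order])

means, stds, fracs = canonicalize([20, 300, 50], [100, 1, 40], [0.3, 0.2, 0.5])
# stds is now ascending: [1, 40, 100], with means/fracs reordered to match.
```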