I tried to define the total number of neurons to use, split them across layers with a custom integer-partitioning function, and then pass the result into the network as a list of layer sizes:
config = {
    "lr": tune.loguniform(1e-4, 1e-1),
    "batch_size": tune.choice([2, 4, 8, 16]),
    "h_total": tune.choice([10, 20, 30]),
    "h_branch": tune.sample_from(
        lambda spec: [
            [N_FEATURES]  # prepend the input layer (number of features)
            + split_sampling(num_ele=h_seg_ele,
                             n_min=min_neuron_per_layer,
                             out_dim=None)
            for h_seg_ele in spec.config.h_total
        ]
    ),
}
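For reference, `split_sampling` is my custom integer-partitioning helper. A minimal sketch of the kind of partitioning it performs (a stand-in, not the exact implementation): randomly split `num_ele` hidden neurons into layers of at least `n_min` neurons each, optionally appending a fixed output dimension.

```python
import random

def split_sampling(num_ele, n_min, out_dim=None):
    """Randomly partition `num_ele` hidden neurons into layers of >= n_min each."""
    if num_ele < n_min:
        layers = [num_ele]          # too few neurons: fall back to one small layer
    else:
        layers, remaining = [], num_ele
        while remaining >= n_min:
            size = random.randint(n_min, remaining)
            layers.append(size)
            remaining -= size
        if remaining:               # fold any leftover neurons into the last layer
            layers[-1] += remaining
    if out_dim is not None:
        layers.append(out_dim)      # optionally append a fixed output layer
    return layers
```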
But this doesn't seem to work: it raises an error when I refer to the parameter while building the model:
model = Net(N_SEGMENT, config["h_branch"]).to(device)
It returns this error:
----> 9 self.hidden.append(NN_Branch(Layers[s_id]))
10
TypeError: 'Function' object is not subscriptable
It seems like Ray Tune does not allow a parameter to be multidimensional, but that is the only way I can see to express a flexible combination of layers and neurons. Could someone shed some light on this?
(Update)
I tried to get around the problem by pre-building a list of NN architectures and using a list of indices as the config parameter for Ray Tune, but that also failed.
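For reference, the workaround looked roughly like this (a sketch, not my exact code; `N_FEATURES = 8` and `n_min = 5` are placeholder values): enumerate every candidate architecture up front, then let Ray Tune sample a flat index into that list.

```python
def all_partitions(n, n_min):
    """Yield every ordered split of n neurons into layers of at least n_min each."""
    if n == 0:
        yield []
        return
    for first in range(n_min, n + 1):
        for rest in all_partitions(n - first, n_min):
            yield [first] + rest

# Pre-compute every candidate architecture (placeholder values for
# N_FEATURES and the layer minimum; adjust to the real search space).
N_FEATURES = 8
ARCHS = [[N_FEATURES] + part
         for h_total in (10, 20, 30)
         for part in all_partitions(h_total, n_min=5)]

# The flat index is a plain scalar, so Ray Tune can sample it:
#   config = {"arch_idx": tune.choice(list(range(len(ARCHS))))}
# and inside the trainable: layers = ARCHS[config["arch_idx"]]
```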