Ray and Hydra integration

How severely does this issue affect your experience of using Ray?

  • None: Just asking a question out of curiosity

It would be interesting to be able to integrate Hydra configuration with Tune. The aim would be to simplify the process of injecting hyperparameters generated by Tune into a Hydra-based configuration.

As a first-pass approach I have defined my hyperparameter names to be the dot path to where the hyperparameter is stored in the configuration.

e.g. with the following config:

model:
    encoder:
        hidden_size: 128
        dropout: 0.1
dataset:
    balance_ratio: 0.3

I am currently defining hyperparameters like so (the search space below is defined for the SigOpt suggester, but doesn’t necessarily have to be) and then using OmegaConf.update to update the configuration via the dot paths:

search_space:
- name: dataset.balance_ratio
  type: double
  bounds:
    max: 0.1
    min: 0.000001
- name: model.encoder.hidden_size
  type: int
  bounds:
    min: 4
    max: 9
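
For concreteness, a minimal sketch of that injection step (the helper name and the suggestion dict below are hypothetical): the suggester returns values keyed by dot path, and OmegaConf.update writes each one back into the composed config.

from omegaconf import OmegaConf

def apply_suggestion(cfg, suggestion):
    # `suggestion` is assumed to be a flat dict keyed by dot path,
    # e.g. {"dataset.balance_ratio": 0.05, "model.encoder.hidden_size": 64}
    for dot_path, value in suggestion.items():
        # OmegaConf.update walks the dot path and sets the value in place
        OmegaConf.update(cfg, dot_path, value)
    return cfg

cfg = OmegaConf.create({
    "model": {"encoder": {"hidden_size": 128, "dropout": 0.1}},
    "dataset": {"balance_ratio": 0.3},
})
apply_suggestion(cfg, {"dataset.balance_ratio": 0.05, "model.encoder.hidden_size": 64})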

My first thought about how this might be achieved would be to embed the hyperparameters in the configuration itself and have Tune detect and resolve the hyperparameters, using custom OmegaConf resolvers.

The final Hydra config might then look something like this:

model:
  encoder:
    hidden_size:
      __tuneparameter__: true
      type: int
      bounds:
        min: 4
        max: 9
    dropout: 0.1
dataset:
  balance_ratio:
    __tuneparameter__: true
    type: double
    bounds:
      min: 0
      max: 1
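
To make the idea concrete, here is a rough sketch of what that detection step could look like. The __tuneparameter__ marker and the helper below are hypothetical, not an existing Tune API, and the mapping onto tune.randint/tune.uniform is just one possible convention.

from omegaconf import OmegaConf
from ray import tune

def extract_search_space(node, prefix=""):
    # Walk a plain-dict view of the config; wherever a node carries the
    # __tuneparameter__ marker, emit a Tune sampler keyed by its dot path.
    space = {}
    for key, value in node.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict) and value.get("__tuneparameter__"):
            lo, hi = value["bounds"]["min"], value["bounds"]["max"]
            if value["type"] == "int":
                space[path] = tune.randint(lo, hi + 1)  # randint's upper bound is exclusive
            else:
                space[path] = tune.uniform(lo, hi)
        elif isinstance(value, dict):
            space.update(extract_search_space(value, path))
    return space

cfg = OmegaConf.load("config.yaml")  # the annotated config shown above (the path is made up)
search_space = extract_search_space(OmegaConf.to_container(cfg, resolve=False))
# -> {"model.encoder.hidden_size": tune.randint(4, 10), "dataset.balance_ratio": tune.uniform(0, 1)}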

I’m posting another comment here that we got from @addisonklinke on Slack:

"
Essentially I would aim for

  1. Some interface on the model class which defines reasonable defaults for the search space. Preferably this would support Hydra structured configs since those have advantages (type safety, inheritance, composability) over plain YAML files
    a. Something similar for the dataloader (i.e. pytorch_lightning.datamodule) since that is often part of the hparam consideration
    b. Like I noted in the FIXME snippet, the difficulty is you can’t directly use tune.choice or other samplers in a structured config; each key needs to be a primitive or another structured config. An easy way to accomplish this would be TuneChoiceConf = hydra_zen.builds(tune.choice) with this nice extension library. The devs on that project might also be good references to talk to
  2. A way to expose the above config for CLI modifications

The above items are standard integration practices which wouldn’t necessarily require modifications to the Ray Tune library, although maybe there are some helper functions/classes that could be provided to reduce boilerplate code.
"

And my API proposal was something like this:

@hydra.main(config_path=".", config_name="hydra-config")
def train(cfg, checkpoint_dir=None):
    alpha = cfg.parameters.alpha
    beta = cfg.parameters.beta  # "beta" here would come from the Tune search space below
    # ...

tune.run(
    train,
    config={
        "beta": tune.grid_search([1, 2, 3])
    },
    # ...
)

@Nintorac would this be useful for you? If not, can you provide an API example that would be?

I guess an alternative could be

def train(config, checkpoint_dir=None):
    alpha = config["alpha"]
    beta = config["beta"]
    # ...

tune.run(
    train,
    config=load_hydra_config("hydra-config.yaml"),
    # ...
)
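
Such a load_hydra_config helper doesn’t exist today; one rough sketch of what it could do, built on Hydra’s compose API (the config_path/config_name arguments are assumptions):

from hydra import compose, initialize
from omegaconf import OmegaConf

def load_hydra_config(config_path, config_name):
    # Compose the config (defaults list, overrides, etc.) without the
    # @hydra.main decorator, then hand Tune a plain nested dict.
    with initialize(config_path=config_path):
        cfg = compose(config_name=config_name)
    return OmegaConf.to_container(cfg, resolve=True)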

For me the annoying part is having to insert the proposed parameters into the configuration as an extra step.

I also have to ensure that the dot paths I define in the search space actually map to real parameters being passed into the model; it’s not too difficult, but it adds surface area for errors.


My config structure currently looks like this:

config
├── dataset
│   ├── local.yaml
│   └── sepsis.yaml
├── dev.yaml
└── experiment
    ├── resnet1d.yaml
    └── rnn.yaml

dev.yaml looks like this

defaults:
  - dataset: sepsis
  - experiment: rnn

rnn.yaml

model:
  type: rnn
  config:
    layers: 3
    in_features: null 
    out_features: null
    hidden_features: 5

search_space:
- name: experiment.lr
  type: double
  bounds:
    max: 0.01
    min: 0.000001
- name: experiment.model.config.hidden_features
  type: int
  bounds:
    min: 4
    max: 9
- name: experiment.model.config.dropout
  type: double
  bounds:
    min: 0
    max: 1
lr: 0.001

and dataset/sepsis.yaml

cohort_name: all_patients
vitals: 
  - heart_rate
  - temperature
  - bp_mean
  - bp_systolic
  - bp_diastolic
diagnoses: 
  - sepsis
meta_database: testbucketed_meta
database: testbucketed_mimic_iv
data_root: ~/data

With this I don’t see a clear way to define hyperparameters over the dataset without some boilerplate to combine the search space dictionaries. It also wouldn’t work for more dynamic configurations where the dot path to the hyperparameter may not be known ahead of time (no actual use cases for this at the moment, however, so I may be overthinking it).

Ideally I just want to be able to define the search space directly where it will be used, and then have Tune look through the config for tunable hyperparameters and transparently replace each search space definition with the suggested value.

e.g. rnn.yaml would turn into something like:

model:
  type: rnn
  config:
    layers: 3
    in_features: null 
    out_features: null
    hidden_features:
      __tuneparameter__: true
      type: int
      bounds:
        min: 4
        max: 9
        
lr:
  __tuneparameter__: true
  type: double
  bounds:
    max: 0.01
    min: 0.000001

From your proposal I am not really sure I understand what would be different from how it currently works; there may be some disconnect, though, as I have only used the SigOpt suggester.

Ok, this sounds more like the second alternative. Basically, here we would use load_hydra_config to parse the YAML files and construct the search space (and the constant parameters). Does that make sense?

In the tuning function, would you be ok with accessing your parameters with config["experiment"]["lr"] etc., or would you prefer the Hydra-style cfg.experiment.lr? And would this be a hard requirement, or is it mostly about defining and parsing the config for you?

I do think it’s best to keep using Hydra (OmegaConf, really) style access; IMO it looks cleaner, but I don’t think this is a functional requirement.

One difficulty that may be encountered is the unstandardised way each suggester defines its search space. I am not sure how your proposal handles this (e.g. the suggester would already be initialised by the time you run the load_hydra_config function).

And fwiw my project currently looks more like this

@hydra.main(...)
def main(config: DictConfig):
    # do stuff to set up the train and tune

    tune.run(train, ...)

def train(config):
    # do stuff to set up and run training
    model = hydra.utils.instantiate(config.model)
    dataset = hydra.utils.instantiate(config.dataset)
    train_model(model, dataset)

So variable access is kind of implicit anyway with hydra.utils.instantiate

Ah interesting! So if Tune were able to parse an OmegaConf object into a Tune-compatible config, that would work for you?

Then we wouldn’t even need a load_hydra_config anymore; we could just auto-detect OmegaConf and convert from there.
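
A minimal sketch of that auto-detection, assuming Tune keeps accepting plain dicts and only adds a conversion branch (the helper name is hypothetical):

from omegaconf import DictConfig, OmegaConf

def to_tune_config(config):
    # Hypothetical conversion step: if the caller passed an OmegaConf object,
    # turn it into the plain nested dict that Tune already understands;
    # otherwise pass it through unchanged.
    if isinstance(config, DictConfig):
        return OmegaConf.to_container(config, resolve=True)
    return config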

+1 on this idea. I have implemented something very similar to this, but with just regular YAML configs, not Hydra. It would be amazing to have this functionality come out of the box with Hydra usage.