Converting tune.choice values to integers

I am trying to do hyperparameter tuning with Ray Tune.

Currently my tune_config is shown in the code below:

self.tune_config = {
    "batch_size": tune.choice([128, 256, 512]),
    "epoch": tune.choice([50, 100, 200]),
    "sequence_length": tune.choice([128, 256]),
}

When I apply this config as in the code below, an error occurs:

if config.tune:
    self.batch_size = int(tune_config["batch_size"])
    self.sequence_length = int(tune_config["sequence_length"])

I referenced this guide: How to use Tune with PyTorch — Ray 1.11.0.

How can I get the tune.choice options as integers?

Sorry for my poor English skills and thanks for reading.

Hi @JS_H, can you share the script you are running?

If you are trying to access the configuration value inside the Trainable function, you should be retrieving the generated hyperparameter value with config["batch_size"], which will be an integer value.
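
For example, here is a minimal sketch (the trainable name and the reported metric are placeholders, not from your code):

from ray import tune

def trainable(config):
    # By the time Tune calls this function, it has already sampled
    # concrete values, so config["batch_size"] is a plain int.
    batch_size = config["batch_size"]  # e.g. 128, 256, or 512
    assert isinstance(batch_size, int)
    tune.report(loss=0.0)  # placeholder metric

tune.run(trainable, config={"batch_size": tune.choice([128, 256, 512])})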

Hi @matthewdeng, thanks for the reply. Can you give me more details about your answer? (Sorry for my poor English skills.)

By the way, here is my code:

  1. Main code
    def run(self):

        self._gridsearch(num_samples=10, max_num_epochs=10, gpus_per_trial=2)

        # Clean up after finishing
        self._clean_up()

    def _gridsearch(self, num_samples, max_num_epochs, gpus_per_trial):

        # Load DataSet
        self._load_dataset()
        feature_dim = self.stock_data_reader.get_input_shape()

        # Model
        self.model = model_REGISTRY[self.config.model](
            feature_dim, self.config, self.tune_config
        ).to(self.config.device)

        # Denoising
        self.denoising_model = denoising_REGISTRY[self.config.denoising](
            self.config
        ).to(self.config.device)

        if self.config.checkpoint_path != "" and self._load_model() is False:
            return

        # Test Mode
        if not self.config.training_mode:
            self.test()
            return

        # Learner
        self.learner = learner_REGISTRY[self.config.learner](
            self.denoising_model, self.model, self.logger, self.config, self.tune_config
        )
        if self.config.use_cuda:
            self.learner.cuda()

        scheduler = ASHAScheduler(
            metric="loss",
            mode="min",
            max_t=max_num_epochs,
            grace_period=1,
            reduction_factor=2,
        )
        reporter = CLIReporter(
            metric_columns=["loss", "training_iteration"]
        )
        result = tune.run(
            tune.with_parameters(self.train),
            tune.with_parameters(self.learner),
            tune.with_parameters(self.model),
            tune.with_parameters(self.stock_data_reader),
            resources_per_trial={"cpu": 2, "gpu": gpus_per_trial},
            config=self.tune_config,
            num_samples=num_samples,
            scheduler=scheduler,
            progress_reporter=reporter,
        )
        # ...
  2. The code where the error occurs (stock_data_reader.py)
class PriceDataReader:
    def __init__(self, logger, config, tune_config):
        self.logger = logger
        self.config = config
        self.lag = config.lag  # pct_change
        self.batch_size = config.batch_size
        self.sequence_length = config.sequence_length

        if config.tune:
            self.batch_size = int(tune_config["batch_size"])
            self.sequence_length = int(tune_config["sequence_length"])
        # ...

Currently, default parameters are set by config files.
However, when using Ray Tune, I want to override some parameters with tune_config, as in the code above.

Please tell me if you need more information about my code.

Thanks again for the reply and for reading my comments.

Hi @JS_H, the main piece we’re missing is the training function.

Your call:

        result = tune.run(
            tune.with_parameters(self.train),
            tune.with_parameters(self.learner),
            tune.with_parameters(self.model),
            tune.with_parameters(self.stock_data_reader),
            resources_per_trial={"cpu": 2, "gpu": gpus_per_trial},
            config=self.tune_config,
            num_samples=num_samples,
            scheduler=scheduler,
            progress_reporter=reporter,
        )

seems off - you should only supply a single trainable function, but it looks like you’re trying to submit 4 of them.

Your training function should probably resemble something like this: Getting Started — Ray 1.11.0

Minimal example:

def train(config):
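    # Tune passes already-sampled values here, so config["batch_size"] is a plain int.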
    price_data_reader = PriceDataReader(..., tune_config=config)
    # ...

The error you’re getting comes up because you are trying to access Tune config parameters outside a training function.
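
Applied to your code, the structure could look roughly like this (a sketch only: the constructor arguments are guessed from your snippets, tune.with_parameters is used to pass the non-tunable objects, and the metric reporting is a placeholder):

def train(tune_config, logger=None, config=None):
    # tune_config values are already sampled here, so they are plain ints.
    reader = PriceDataReader(logger, config, tune_config)
    # ... build the model/learner, run the training loop, and report a metric:
    # tune.report(loss=loss)

result = tune.run(
    tune.with_parameters(train, logger=self.logger, config=self.config),
    resources_per_trial={"cpu": 2, "gpu": gpus_per_trial},
    config=self.tune_config,
    num_samples=num_samples,
    scheduler=scheduler,
    progress_reporter=reporter,
)

Note that train is the only trainable passed to tune.run.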

Hi @kai, thanks for your reply. I will try your solution.
By the way, is there a way to get a tune_config value back as a float or an integer?
For example, with my tune_config as below,

self.tune_config = {
    "batch_size": tune.choice([128, 256, 512]),
    "epoch": tune.choice([50, 100, 200]),
    "sequence_length": tune.choice([128, 256]),
}

I want to get tune_config["batch_size"] as an integer in the trainable function.
But when I do that, it doesn't work, because tune_config["batch_size"] is a ray.tune.sample type.
Is there any way to solve that?

The reason I want to do this is that I get config (not tune_config) from other files, but I want to override some parameters from tune_config. Therefore, as in my first question, I want to do something like self.batch_size = int(tune_config["batch_size"]). If I try this inside the training function, will it be okay?

Thanks again for the reply.

Hi @kai, I fixed my problem with your solution. Thanks.
I wrapped all my functions into one trainable function that takes the Tune config as input,
and I overrode my original config using setattr, roughly as sketched below.
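
A minimal sketch of that override pattern (the attribute names come from the snippets above; the surrounding training code is omitted):

def train(tune_config):
    # Copy the sampled Tune values onto the file-based config object.
    # Inside the trainable these are already plain ints, so no casting is needed.
    for key in ("batch_size", "epoch", "sequence_length"):
        setattr(config, key, tune_config[key])
    # ... build the data reader / model / learner and train as before ...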
Thanks!
