How severely does this issue affect your experience of using Ray?
- High: It blocks me from completing my task.
The code runs up to the line "model = train.torch.prepare_model(model)", at which point PyCharm reports an error: "ray.train.error.SessionMisuseError: prepare/accelerate utility functions should be called inside a training function executed by `Trainer.run`". Do I need to write any other code?
Hey, you’ll need to create a TorchTrainer to run your code in a distributed fashion.
Can you take a look at the following resources and see if they help?
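Roughly, the pattern looks like this. This is only a minimal sketch assuming the Ray 2.x API (the nn.Linear model is just a placeholder, and exact import paths can vary a bit between Ray releases):

import torch.nn as nn
import ray.train.torch
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer

def train_func():
    # prepare_model must be called inside the function that the trainer
    # runs on each worker; calling it outside raises SessionMisuseError.
    model = nn.Linear(10, 1)  # placeholder model
    model = ray.train.torch.prepare_model(model)
    # ... your training loop goes here ...

trainer = TorchTrainer(
    train_func,
    scaling_config=ScalingConfig(num_workers=2, use_gpu=False),
)
result = trainer.fit()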
I have created a TorchTrainer. But I passed a lot of parameters directly to train_func; should this be avoided?
For example:

trainer = TorchTrainer(
    train_func(a,b,c,d,e,f,g),
    scaling_config=ScalingConfig(use_gpu=use_gpu, num_workers=2)
)
Yeah, if you do that then it will execute train_func(a,b,c,d,e,f,g) and pass the result to TorchTrainer, rather than the function itself!
You could try something like:
def updated_train_func():
    return train_func(a,b,c,d,e,f,g)

trainer = TorchTrainer(
    updated_train_func,  # note that there is no ()
    scaling_config=ScalingConfig(use_gpu=use_gpu, num_workers=2)
)
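As a side note (not something covered above, just a sketch assuming a recent Ray release): TorchTrainer also accepts a train_loop_config dict, which is handed to the training function as its single config argument, so small values like a and b can be passed that way instead of through a closure:

def train_func(config):
    a = config["a"]  # only small, serializable values belong here
    b = config["b"]
    # ... use them in the training loop ...

trainer = TorchTrainer(
    train_func,
    train_loop_config={"a": 1, "b": 2},  # hypothetical values
    scaling_config=ScalingConfig(use_gpu=use_gpu, num_workers=2)
)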
However, keep in mind that the whole function is serialized, so if any of the parameters a,b,c,d,e,f,g
are large (e.g. a dataset or a model), it would be a good idea to initialize them within the training function directly!
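To make that concrete, here is a quick sketch (build_big_model and the tiny nn.Linear below are stand-ins, not code from this thread):

import torch.nn as nn
import ray.train.torch

# Captured from the enclosing scope -> serialized and shipped with the function:
# big_model = build_big_model()
# def train_func():
#     model = ray.train.torch.prepare_model(big_model)

# Constructed inside the training function -> nothing large to serialize:
def train_func():
    model = nn.Linear(10, 1)  # placeholder; build your real model here
    model = ray.train.torch.prepare_model(model)
    # load your dataset here as well, then run the training loop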
Ok, thank you very much for your answer. I will try to modify it!