Is there a way to avoid parallel processing when training a model with Ray?

I am checking the logs because I need to inspect the data during the model's training phase.
The same work seems to be processed four times under the same conditions because of Ray's parallel processing.
Is there a way to train the model without parallelism (i.e., serially)?
Which option should I change?
Thank you.

By not parallelising, do you mean using just a single worker, or a single node? You can set num_workers=1 in your ScalingConfig (note: this is a ScalingConfig option, not RunConfig). I believe this will limit training to a single worker, so your training function runs only once.
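If you are using Ray Train, a minimal sketch might look like the following. This assumes the TorchTrainer API; `train_loop_per_worker` is a placeholder for your own training function, not something from your code:

```python
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer

def train_loop_per_worker(config):
    # Your training code goes here. With num_workers=1 below,
    # this function runs on exactly one worker, so log lines and
    # data passes are not duplicated across workers.
    ...

trainer = TorchTrainer(
    train_loop_per_worker,
    # num_workers=1 disables data-parallel replication of the loop
    scaling_config=ScalingConfig(num_workers=1),
)
result = trainer.fit()
```

If the fourfold repetition you see comes from somewhere else (e.g., multiple Tune trials), the fix would be in that component's configuration instead, so it is worth checking where the four runs originate.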

Ray is purpose-built for distributed compute, so I'm not sure why you want to force serial execution. Perhaps I'm missing something here. Please elaborate.