ray.data.read_csv on a huge dataset: memory limitations

I am using the Ray AIR TorchTrainer on my Ray cluster. The dataset is created from a CSV file with ray.data.read_csv and is then consumed by multiple training workers. The problem is that when TorchTrainer starts, the entire file appears to be read into memory on a single node. Is there any way to avoid loading the complete file into memory at once?