Is it possible to have dataloaders in RLlib?

How severely does this issue affect your experience of using Ray?

  • High: It blocks me from completing my task.

I am using a custom environment. This environment relies on precomputed values to interact with the agent; you can think of them as all possible actions for every step, which is huge. The data is around 30 GB, and I don't know how many copies RLlib needs, because I get an out-of-memory (OOM) error even with only one env-runner on a machine with 500 GB of memory (it crashed after consuming about 95% of the available memory). I wonder whether it is possible to split this data into smaller chunks and load each chunk only when it is required, roughly as in the sketch below. If that is possible, could you point me to some examples or documentation on how to do it?
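To make the first idea concrete, here is a rough sketch of what I have in mind, assuming the precomputed values could be stored on disk as per-chunk `.npy` files, each of shape `(num_steps, obs_dim)`. The `ChunkedActionEnv` name, the `chunk_dir` layout, the observation shape, and the reward logic are all placeholders I made up:

```python
import glob
import os

import gymnasium as gym
import numpy as np


class ChunkedActionEnv(gym.Env):
    """Sketch of an env that keeps only one chunk of the precomputed data in memory."""

    def __init__(self, config=None):
        config = config or {}
        # Hypothetical layout: chunk_000.npy, chunk_001.npy, ... in this directory.
        self.chunk_files = sorted(
            glob.glob(os.path.join(config.get("chunk_dir", "/data/chunks"), "chunk_*.npy"))
        )
        self._chunk_idx = -1
        self._chunk = None  # only the currently loaded chunk lives in memory

        # Assumes each row of a chunk is a 4-dim observation; adjust to the real data.
        self.observation_space = gym.spaces.Box(-np.inf, np.inf, shape=(4,), dtype=np.float32)
        self.action_space = gym.spaces.Discrete(8)

    def _load_chunk(self, idx):
        if idx != self._chunk_idx:
            # mmap_mode="r" maps the file instead of reading all of it eagerly.
            self._chunk = np.load(self.chunk_files[idx], mmap_mode="r")
            self._chunk_idx = idx

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self._t = 0
        self._load_chunk(int(self.np_random.integers(len(self.chunk_files))))
        return self._chunk[self._t].astype(np.float32), {}

    def step(self, action):
        self._t += 1
        obs = self._chunk[self._t].astype(np.float32)
        reward = 0.0  # placeholder: the real reward would use the precomputed values
        terminated = self._t >= len(self._chunk) - 1
        return obs, reward, terminated, False, {}
```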
The other possibility I can imagine is to train the model on one chunk, save it, load the next chunk of data, and resume training the same model (second sketch below). Is that possible?
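Here is a rough sketch of that second idea: train on one chunk, checkpoint, rebuild the algorithm pointing at the next chunk, and restore the previous weights. PPO, the chunk paths, and the loop bounds are arbitrary placeholders, and I am not sure how `algo.save()`'s return value differs across Ray versions:

```python
from ray.rllib.algorithms.ppo import PPOConfig

checkpoint = None
for chunk_dir in ["/data/chunks_part0", "/data/chunks_part1"]:  # hypothetical chunk locations
    config = (
        PPOConfig()
        .environment(ChunkedActionEnv, env_config={"chunk_dir": chunk_dir})
        # In older Ray versions this is .rollouts(num_rollout_workers=1).
        .env_runners(num_env_runners=1)
    )
    algo = config.build()
    if checkpoint is not None:
        # Continue from the weights learned on the previous chunk.
        algo.restore(checkpoint)
    for _ in range(10):
        algo.train()
    # Depending on the Ray version, save() returns a path string or a checkpoint object.
    checkpoint = algo.save()
    algo.stop()
```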