How severely does this issue affect your experience of using Ray?
- Medium: It causes significant difficulty in completing my task, but I can work around it.
I started a Ray cluster on a local node with 100 cores. The first set of tasks loads a lot of data, so each Ray worker process ends up using more than 10 GB of memory. After that, only a few follow-up tasks run. But when I inspect the system with htop, the ray::IDLE processes still hold more than 10 GB each, and this sometimes causes an OOM error. It looks like the memory is never released back to the OS. Is this expected behavior? Is there a setting that forces idle workers to release their memory?
I cannot post anything from my working computer due to security reasons.