Feasibility of Ray for heterogeneous computational environments

How severely does this issue affect your experience of using Ray?
None. Just asking a question out of curiosity

Hi Ray Community,

The main question

Ray is designed to easily scale single-machine code, i.e., code running in a homogeneous computing environment. However, with recent feature additions such as runtime_env and its container option, can Ray also be used to scale workloads across heterogeneous computing environments? If so, what would be the best approach? Any guidance would be appreciated.
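To make the question concrete, here is a minimal sketch of what I have in mind, assuming the experimental container field of runtime_env can point each task at its own image. The image names and task functions are placeholders I made up, and I understand this option's exact behavior may differ across Ray versions:

```python
# Sketch: run each Ray task inside its own (placeholder) container image
# via the experimental runtime_env "container" option.
import ray

ray.init()

@ray.remote(runtime_env={"container": {"image": "my-org/structural-solver:1.2"}})
def run_structural_analysis(case_id: str) -> str:
    # Would execute inside the structural-solver image, not the driver's environment.
    return f"finished structural case {case_id}"

@ray.remote(runtime_env={"container": {"image": "my-org/thermal-solver:0.9"}})
def run_thermal_analysis(case_id: str) -> str:
    # Would execute inside the thermal-solver image.
    return f"finished thermal case {case_id}"

results = ray.get([
    run_structural_analysis.remote("A-42"),
    run_thermal_analysis.remote("A-42"),
])
print(results)
```

Is this per-task container approach the intended usage, or is there a better pattern?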

Context

My organization is doing a retrospective on the workflows and frameworks we currently use to execute our analysis code and optimizations. Although we are not doing anything related to Machine Learning, our analytical pipeline involves engineering applications with widely varying runtimes - from seconds to hours or even days - and Ray Core provides the kind of tooling we need to deploy our analytical codes at scale. The only problem is that it is not practical, if even possible, to put all of the engineering applications into a single computing environment. Instead, we are containerizing each engineering application in its own dedicated environment, i.e., its own Dockerfile/Docker image. Given this scenario, is Ray still a feasible option for deploying the different analytical codes, each with its dedicated container, at scale? Could this be achieved by setting the appropriate runtime_env configurations, or by using Ray Serve to create a service for each engineering application so that each could be autoscaled independently (a rough sketch of the Serve idea follows below)?
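For the Ray Serve variant, this is roughly the shape I'm imagining: one deployment per engineering application, each carrying its own (placeholder) container image via ray_actor_options and its own autoscaling_config. Again, the class names, images, and route prefixes are assumptions for illustration only, not a working setup I have tested:

```python
# Sketch: one Ray Serve deployment per containerized engineering application,
# each autoscaled independently. Images and names are placeholders.
from ray import serve

@serve.deployment(
    ray_actor_options={"runtime_env": {"container": {"image": "my-org/structural-solver:1.2"}}},
    autoscaling_config={"min_replicas": 1, "max_replicas": 10},
)
class StructuralSolver:
    async def __call__(self, request):
        # Invoke the containerized solver on the request payload here.
        return {"status": "ok", "app": "structural"}

@serve.deployment(
    ray_actor_options={"runtime_env": {"container": {"image": "my-org/thermal-solver:0.9"}}},
    autoscaling_config={"min_replicas": 0, "max_replicas": 4},
)
class ThermalSolver:
    async def __call__(self, request):
        return {"status": "ok", "app": "thermal"}

# Run each application under its own name and route so they scale independently.
serve.run(StructuralSolver.bind(), name="structural", route_prefix="/structural")
serve.run(ThermalSolver.bind(), name="thermal", route_prefix="/thermal")
```

Would something along these lines be a reasonable way to get independent autoscaling per application, or is mixing the container runtime_env with Serve deployments asking for trouble?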

I would love to make this work with Ray and get all the benefits it provides for free, but I'm wondering whether the container runtime_env is still too experimental for this, and whether our scenario is too far from the "single-machine code" use case for which Ray was primarily designed.

Your thoughts are appreciated. Thanks!