How severely does this issue affect your experience of using Ray?
- High: It blocks me from completing my task.
Problem: I’m unable to parallelize a function.
What the function does:
- Some stuff
- Load an image from AWS S3.
- Preprocess the image.
- Run inference with a detectron2 model (GPU).
- Apply rules to the output of the model inference.
- Return data.
Does anyone know how to parallelize this function?
Do you want to have parallelized execution of many instances of this function over many images, or parallelize certain steps within this function?
Assume the function takes an S3 URL and returns data. You can probably parallelize by running the function over a list of S3 URLs (s3_url_list):

```python
@ray.remote
def process_image(s3_url):
    ...

results = [process_image.remote(url) for url in s3_url_list]
```
1 Like
I need to parallelize the whole function. I’ve tried many different approaches, but the processes just don’t recognize the GPUs.
Here’s my code:

```python
# imports ...

def fun_a_for_fun_to_parallel(...):
    ...

def fun_b_for_fun_to_parallel(...):
    ...

def fun_c_for_fun_to_parallel(...):
    ...

@ray.remote
def fun_to_parallel(...):
    # stuff ...
    # function call with model inference ...
    # stuff ...
    return ...
```
One of the other scripts imported into the main script above:

```python
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
# more imports ...

cfg = get_cfg()
# more config ...
model = DefaultPredictor(cfg)

def fun(...):
    # stuff ...
    outputs = model(im)
    # stuff ...
    return ...
```
I’ve also tried replicating the model and assigning the copies to different GPUs (cuda:0, cuda:1, cuda:2, …), which didn’t work.
I also tried setting num_gpus to 2, 3, 4, … in ray.init().
Have you tried setting num_gpus=1 when converting the function to a Ray remote function? e.g.

```python
@ray.remote(num_gpus=1)
def fun_to_parallel(...):
    ...
```

More documentation is at GPU Support — Ray 1.12.1
Another way to convert a function to a Ray remote function, without the @ray.remote decorator, is to call `ray.remote(func).options(num_cpus=xx).remote(args...)`
I tried using Ray Serve, and it’s now working. Thanks @Mingwei for your help!