Parallel Detectron2 (PyTorch) inference with GPU

How severe does this issue affect your experience of using Ray?

  • High: It blocks me from completing my task.

Problem: I’m unable to parallelize a function.

What the function does:

  1. Some stuff
  2. Load image from AWS S3.
  3. Preprocess image
  4. Make inference with a detectron2 model. (GPU)
  5. Apply rules to the output of the model inference.
  6. Return data

Does anyone know how to parallelize this function?

Do you want to have parallelized execution of many instances of this function over many images, or parallelize certain steps within this function?


Assume the function takes an S3 URL and returns data. You can probably parallelize it by running many invocations over a list of S3 URLs (s3_url_list):

@ray.remote
def process_image(s3_url):
    ...

results = ray.get([process_image.remote(url) for url in s3_url_list])

I need to parallelize the whole function. I've tried many different approaches, but the worker processes just don't recognize the GPUs.

Here’s my code:

Imports .....

def fun_a_for_fun_to_parallel(...):

def fun_b_for_fun_to_parallel(...):

def fun_c_for_fun_to_parallel(...):

def fun_to_parallel(...):
    function call with model inference...
    return ...

One of the other scripts imported into the main script above:

from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
more imports

cfg = get_cfg()
more configs
model = DefaultPredictor(cfg)

def fun(...):
    outputs = model(im)
    return ...

I've also tried replicating the model and assigning the replicas to different GPUs (cuda:0, cuda:1, cuda:2, …), which didn't work.

I also tried setting num_gpus to 2, 3, 4, … in ray.init().

Have you tried setting num_gpus=1 when converting the function to a Ray remote function? e.g.

@ray.remote(num_gpus=1)
def fun_to_parallel(...):
    ...
More documentation is at GPU Support — Ray 1.12.1


Another way to convert a function into a Ray remote function, without the @ray.remote decorator, is to call ray.remote(func).options(num_cpus=xx).remote(args...)


I tried using Ray Serve, and it’s now working. Thanks @Mingwei for your help!

@Mingwei @james811223
I'm unable to use multiple GPUs while doing inference. I've raised an issue at Issue on page /serve/getting_started.html · Issue #27905 · ray-project/ray · GitHub
Can you please help?