Hello.
First of all, thank you for your contribution.
I'm wondering if I can run two models (A and B) at the same time, with 2 GPUs for A and 1 GPU for B, on the 2 GPUs I have.
I wrote the example code below, but it hangs and never reaches shutdown.
I've waited quite a long time and it is still hung.
Could you let me know how to do this correctly?
import ray
from ray.util.scheduling_strategies import PlacementGroupSchedulingStrategy

ray.init()

pg = ray.util.placement_group([
    {"GPU": 2},  # Bundle for Model A
    {"GPU": 1},  # Bundle for Model B
])
ray.get(pg.ready())  # the script hangs here and never returns

@ray.remote(num_gpus=2)
def train_model_a(data):
    import time
    time.sleep(1)  # Simulate training
    return "ModelA trained with data"

@ray.remote(num_gpus=1)
def train_model_b(data):
    import time
    time.sleep(1)  # Simulate training
    return "ModelB trained with data"

data_a = "data for model A"
data_b = "data for model B"

future_a = train_model_a.options(
    scheduling_strategy=PlacementGroupSchedulingStrategy(
        placement_group=pg,
        placement_group_bundle_index=0,  # Bundle for Model A
    )
).remote(data_a)

future_b = train_model_b.options(
    scheduling_strategy=PlacementGroupSchedulingStrategy(
        placement_group=pg,
        placement_group_bundle_index=1,  # Bundle for Model B
    )
).remote(data_b)

results = ray.get([future_a, future_b])
print(results)
ray.shutdown()
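
One thing I noticed while writing this up: the two bundles ask for 2 + 1 = 3 GPUs in total, but my machine only has 2, so my guess is that pg.ready() can never be fulfilled and blocks forever. Below is a minimal check I used to confirm the totals while debugging; it assumes a single-node cluster, and uses a timeout on ray.get so the script fails fast instead of hanging:

import ray
from ray.exceptions import GetTimeoutError

ray.init()
# Shows the total resources Ray sees; on my machine this should
# report something like {'GPU': 2.0, ...}.
print(ray.cluster_resources())

pg = ray.util.placement_group([{"GPU": 2}, {"GPU": 1}])  # needs 3 GPUs total
try:
    # Wait at most 10 seconds instead of blocking forever.
    ray.get(pg.ready(), timeout=10)
except GetTimeoutError:
    print("Placement group could not be scheduled within 10s")
ray.util.remove_placement_group(pg)
ray.shutdown()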
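
If reserving 3 GPUs at once is simply impossible on a 2-GPU machine, would running the two trainings one after the other be the recommended approach? A minimal sketch of what I mean (no placement group, since the tasks never overlap; train_model_a / train_model_b are the same toy functions as above):

import ray

ray.init()

@ray.remote(num_gpus=2)
def train_model_a(data):
    return "ModelA trained with data"

@ray.remote(num_gpus=1)
def train_model_b(data):
    return "ModelB trained with data"

# Model A takes both GPUs; waiting on its result releases them
# before Model B is scheduled on one of the freed GPUs.
result_a = ray.get(train_model_a.remote("data for model A"))
result_b = ray.get(train_model_b.remote("data for model B"))
print(result_a, result_b)
ray.shutdown()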