Torch Ensemble serving

I’m wondering if anyone has used Ray to serve models in an ensemble for inference?
The Serving ML Models — Ray v1.8.0 docs mention ensembles, and I’ve seen pages like Model Ensembling - YOLOv5 Documentation.

Looking for any pointers/gists/examples… thanks!

Hi @puntime_error, this is a feature we have discussed in Serve and are actively planning. You can see the public proposal doc here: [Public] [Serve] Pipeline Proposal Draft - Google Docs. Feel free to leave your comments and feedback!

Check this out: Pipeline API (Experimental) — Ray v2.0.0.dev0
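Until the Pipeline API stabilizes, the usual ensemble pattern is to fan a request out to each model and aggregate the results. Here is a minimal, framework-free sketch of that pattern; the model stand-ins and the mean aggregation are illustrative assumptions, and with Ray Serve each model would instead be its own `@serve.deployment` called through its handle:

```python
from typing import Callable, List


class EnsembleModel:
    """Fans a request out to several models and averages their predictions.

    In Ray Serve, `models` would be deployment handles and the calls
    would be awaited concurrently; plain callables keep the sketch
    self-contained.
    """

    def __init__(self, models: List[Callable[[float], float]]):
        self.models = models

    def __call__(self, x: float) -> float:
        preds = [m(x) for m in self.models]  # fan out to each model
        return sum(preds) / len(preds)       # aggregate (mean ensemble)


# Hypothetical stand-ins for real torch models.
def model_a(x: float) -> float:
    return x * 2.0


def model_b(x: float) -> float:
    return x * 4.0


ensemble = EnsembleModel([model_a, model_b])
print(ensemble(1.0))  # mean of 2.0 and 4.0 -> 3.0
```

The same shape carries over to Serve: the ensemble deployment receives the request once and the fan-out happens server-side, so clients only see a single endpoint.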