I did get the sense from the documentation that a model can be loaded into memory and a Serve deployment created for that case, but I just wanted to check: is it possible to serve a custom containerized model?