Here’s a code example showing how to use a detached actor as the owner of objects you want to persist across deployment downscaling or replica termination. Because the detached actor owns the object, it is not lost when the original deployment is removed (Ray Discourse: sharing objects with deployment; Ray Discourse: is it possible to share objects between different driver processes?):
```python
import ray

# Detached actor that owns and indexes the shared objects
@ray.remote
class ObjectStore:
    def __init__(self):
        self.store = {}

    def put(self, key, ref_list):
        # The ObjectRef arrives wrapped in a list; a bare ObjectRef
        # argument would be resolved to its value before reaching the actor.
        self.store[key] = ref_list[0]

    def get(self, key):
        # Return the ObjectRef itself; refs in return values are not
        # auto-resolved, so the caller receives the reference.
        return self.store.get(key)

# Connect to the cluster and create (or reconnect to) the detached actor
ray.init(address="auto", namespace="shared")
store = ObjectStore.options(
    name="shared_store", lifetime="detached", get_if_exists=True
).remote()

# In your deployment or task, put the object with the detached actor as owner
data = [1, 2, 3]
obj_ref = ray.put(data, _owner=store)
ray.get(store.put.remote("my_data", [obj_ref]))

# Later, from any deployment or process, retrieve the object
retrieved_ref = ray.get(store.get.remote("my_data"))
print(ray.get(retrieved_ref))  # Output: [1, 2, 3]
```
This pattern ensures the object’s lifetime is tied to the detached actor, not the ephemeral deployment worker, so it survives autoscaling and downscaling events.
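For the “from any deployment or process” step, here is a minimal sketch of a second driver reconnecting and looking the actor up by name. It assumes the same cluster, the `shared` namespace, the `shared_store` actor name, and the `my_data` key from the example above:

```python
import ray

# Second driver process: join the same cluster and namespace
ray.init(address="auto", namespace="shared")

# Look up the detached actor by name; it outlives the original driver
store = ray.get_actor("shared_store")

# Fetch the stored ObjectRef, then resolve it to the underlying value
retrieved_ref = ray.get(store.get.remote("my_data"))
print(ray.get(retrieved_ref))  # [1, 2, 3]
```

Two caveats: `_owner` is an underscore-prefixed, experimental parameter of `ray.put`, so its behavior may change between Ray releases; and detached actors are not garbage-collected automatically, so call `ray.kill(store)` once the data is no longer needed, keeping in mind that objects the actor owns become unavailable after it dies.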
Sources:
- Ray Discourse: is it possible to share objects between different driver processes?
- Ray Discourse: sharing objects with deployment
Hint: Mention @RunLLM in the post for follow-ups.