Referred here from the Slack channel and was told to tag @sangcho. Posting the original question here.
I’m wondering how expensive it is to create and destroy Ray actors. I’m working on an idea where I want to create and destroy many actors throughout a program’s life cycle. I’m wondering if this might be an anti-pattern for how Ray was designed, but I thought I should ask before going too far down the path.
Creating and destroying actors both have some overhead:

1. Scheduling overhead. Although Ray uses a decentralized scheduler for tasks, actor scheduling is centralized (under the assumption that actors won’t be frequently created and destroyed).
2. Process startup and teardown overhead. A new worker process may need to be started when an actor is created (though Ray keeps a pool of idle workers, so this doesn’t always happen), and the worker process is destroyed whenever an actor is killed.
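If you want to put numbers on these costs in your own setup, here’s a minimal sketch (the `Worker` actor and the use of a first method call as a readiness check are illustrative, not anything prescribed by Ray):

```python
import time

import ray

ray.init()

@ray.remote
class Worker:
    def ping(self):
        return "pong"

start = time.perf_counter()
w = Worker.remote()
# The first method call blocks until the actor process is actually up,
# so this captures scheduling plus any worker-startup cost.
ray.get(w.ping.remote())
print(f"create + first call: {time.perf_counter() - start:.3f} s")

start = time.perf_counter()
ray.kill(w)  # terminates the actor; returns without waiting for the process to exit
print(f"ray.kill returned in: {time.perf_counter() - start:.3f} s")
```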
That makes sense. I’m currently playing around with a distributed version of genetic algorithms and looking for the best way to work with Ray so that a population, or group, of Python objects can have work done on them and be compared after the fact.
One idea was to make the objects actors, which seemed easier in that attributes that may be confusing to serialize and deserialize would stay out of sight and out of mind for users, but from the sounds of it, plain serialization with tasks is likely the more optimal route.
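Concretely, the task-based alternative might look something like the sketch below, where `Individual` and `evaluate` are hypothetical stand-ins for the population objects and fitness function:

```python
import random

import ray

ray.init()

# Plain Python object; Ray serializes it automatically when it is
# passed to a remote task, so no actor is needed to hold the state.
class Individual:
    def __init__(self, n_genes=10):
        self.genome = [random.random() for _ in range(n_genes)]

@ray.remote
def evaluate(ind):
    # Hypothetical fitness: just the sum of the genes.
    return sum(ind.genome)

population = [Individual() for _ in range(100)]
fitnesses = ray.get([evaluate.remote(ind) for ind in population])
best = population[fitnesses.index(max(fitnesses))]
```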
As a rough benchmark of actor creation cost, I defined:

```python
import ray

ray.init()

@ray.remote
class Foo:
    def method(self):
        return 1
```
Then running this in IPython,
```python
%%time
f = Foo.remote()
ray.get(f.method.remote())
```
gives me about 6ms.
In some cases it will take longer, e.g., around 300 milliseconds if a new worker process needs to be started. Of course, if you’re starting a bunch of actors, that can be done in parallel.
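For instance, here’s a minimal sketch of that parallel startup, reusing the `Foo` actor from above (the count of 20 is arbitrary):

```python
import ray

ray.init()

@ray.remote
class Foo:
    def method(self):
        return 1

# Actor construction is asynchronous: all 20 constructors are submitted
# immediately, so any worker-startup cost is paid concurrently, not serially.
actors = [Foo.remote() for _ in range(20)]

# Waiting on a first method call per actor blocks until every actor is up.
results = ray.get([a.method.remote() for a in actors])
```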