Passing tensor through Ray Actors and getting gradients

How severe does this issue affect your experience of using Ray?

  • High: It blocks me to complete my task.

Overall goal: pass a PyTorch tensor into a Ray Actor method, perform some PyTorch operations on it, get a result back, call .backward() on that result, and obtain the gradient of the input with respect to the result. An example:

import ray
import torch

@ray.remote
class MyActor:
    def compute(self, x):
        y = x * 2
        z = y * torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
        return z

actor = MyActor.remote()

# Create an input tensor with requires_grad=True
x = torch.ones(3, requires_grad=True)

# Pass the tensor to the actor and perform computation
result = actor.compute.remote(x)

# Get the result tensor and backpropagate
output = ray.get(result)
loss = output.sum()
loss.backward()
print(x.grad)


Running this code, however, prints None for x.grad instead of the gradient. How does one do this using Actors?

The actor runs in a separate process, so compute actually receives a serialized copy of x (think pass-by-value). Serialization detaches the tensor from the autograd graph, so the graph built inside the actor is not connected to the driver's x, and calling .backward() on the driver cannot propagate gradients back to it.