In `ray.tune`, one of the variables I'm optimizing over, call it `bins`, has type `list[torch.Tensor]`. The list length is around 300, and each tensor has 8-32 elements. I'm using `tune.choice` to sample different values for this parameter. When `ray.tune` reports the best parameters so far, it logs every tensor in the list, which is far too much output.
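For context, here is a hypothetical reconstruction of the setup (the candidate-generation logic and names like `make_candidate` are my assumptions, not taken from the actual code): each sampled value for `bins` is a full list of ~300 tensors with 8-32 elements each.

```python
import torch
from ray import tune

def make_candidate(seed: int) -> list[torch.Tensor]:
    # Hypothetical: one candidate value for the "bins" hyperparameter,
    # i.e. a list of ~300 tensors of varying length (8-32 elements).
    g = torch.Generator().manual_seed(seed)
    return [
        torch.rand(int(torch.randint(8, 33, (1,), generator=g)), generator=g)
        for _ in range(300)
    ]

search_space = {
    # tune.choice samples one entire list[torch.Tensor] per trial
    "bins": tune.choice([make_candidate(s) for s in range(4)]),
}
```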
To solve the issue, I did something like

```python
class Bins(list):
    def __repr__(self) -> str:
        return "bins"
```
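One plausible explanation for why the subclass doesn't help (an assumption about Tune's internals, not something I've confirmed): the override only fires when `repr` is called on the subclass instance itself, so if Tune copies the config value into a plain `list` (or iterates it) during serialization or reporting, the custom `__repr__` is lost. A minimal sketch of that failure mode:

```python
class Bins(list):
    def __repr__(self) -> str:
        return "bins"

b = Bins([[1, 2], [3, 4]])
print(repr(b))        # the override applies to the subclass instance
print(repr(list(b)))  # a plain-list copy falls back to the default repr
```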
I then used this class to create `bins`, hoping the tensors wouldn't get logged. But when Ray logs the best value for `bins`, it still shows every tensor in the list. Is there a way to disable this behavior?
If it helps: when I instead wrap the bins in a custom class that stores them as a private attribute, `ray.tune` doesn't log the tensors. I'm just wondering if there's a better approach.
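For reference, the workaround I mean is something like the sketch below (the class name `BinsParam` and the label field are hypothetical; plain nested lists stand in for the tensors). Because the tensors live in a private attribute rather than in a list/dict that gets walked, only the short `__repr__` shows up in logs:

```python
class BinsParam:
    """Hypothetical wrapper: hides the bin tensors behind a private
    attribute so only a short label appears in logged output."""

    def __init__(self, name: str, bins: list) -> None:
        self.name = name    # short label that gets logged
        self._bins = bins   # the actual list of ~300 tensors

    def __repr__(self) -> str:
        return f"BinsParam({self.name!r})"
```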
Thanks.