TPU with Ray Tune

Looking at both the tutorial on the official PyTorch website and the official Ray guidelines, it is not clear to me whether it is possible to use a TPU to perform hyperparameter search with Ray Tune. I am fairly new to this tool, and it appears to offer full support only for CPUs and GPUs. If that is the case, I was wondering whether moving all the TPU-related code inside the training loop/function (so, somehow constraining execution to the TPU) could be a feasible workaround, something like the sketch below. Any ideas or suggestions? Thanks, and sorry for the possibly naïve question.
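
To be concrete, this is a minimal sketch of the workaround I have in mind, assuming `torch_xla` is installed on the workers and using the `tune.run` API; the model, data, and hyperparameters are all placeholders. I don't know whether Ray actually supports this, which is my question:

```python
import torch
import torch.nn as nn
from ray import tune


def train_fn(config):
    # All TPU-specific code lives inside the trainable, so only the
    # worker process running the trial ever imports torch_xla.
    import torch_xla.core.xla_model as xm

    device = xm.xla_device()  # acquire a TPU core
    model = nn.Linear(10, 1).to(device)  # placeholder model
    optimizer = torch.optim.SGD(model.parameters(), lr=config["lr"])

    for _ in range(100):
        x = torch.randn(32, 10, device=device)  # placeholder batch
        y = torch.randn(32, 1, device=device)
        loss = nn.functional.mse_loss(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        xm.optimizer_step(optimizer, barrier=True)  # XLA-aware step
        tune.report(loss=loss.item())


analysis = tune.run(
    train_fn,
    metric="loss",
    mode="min",
    config={"lr": tune.loguniform(1e-4, 1e-1)},
)
print(analysis.best_config)
```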

I’m also wondering about this; could someone from the Ray team answer?

@raquelhortab could you say more about the workload you are trying to run?
That would help us prioritize the work.

Also, this ticket may be relevant: https://github.com/ray-project/ray/issues/25223

Hi!
I am conducting a research project and need to tune a few deep learning models on large datasets. I was considering using Google’s TPUs to avoid running the experiments locally, since they take quite a lot of time and I don’t have many computational resources.

I’m currently running some tests locally with Ray, and if it were easy to run on Google’s TPUs I would seriously consider using Ray for my project. However, it also depends on some memory issues I’ve been having with Ray Tune, so even if TPUs are feasible, I wonder whether the same issues would show up there as well (see the sketch below for what I mean).
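
Roughly, the memory problems appear when several trials run at once and compete for RAM, so I have been looking into capping resources along these lines. This is only a sketch with the older `tune.run` API; the trainable, names, and numbers are placeholders, and I have no idea how they would translate to a TPU setup:

```python
import ray
from ray import tune


def train_fn(config):
    # stand-in trainable; the real one trains a deep learning model
    tune.report(loss=config["lr"])


# Cap Ray's object store so concurrent trials can't exhaust RAM
# (2 GiB is just a placeholder value).
ray.init(object_store_memory=2 * 1024**3)

analysis = tune.run(
    train_fn,
    num_samples=20,
    # Reserving more CPUs per trial limits how many trials run at once.
    resources_per_trial={"cpu": 2},
    config={"lr": tune.loguniform(1e-4, 1e-1)},
)
```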

Thanks for answering! @xwjiang2010