I have a question about how Ray Tune handles GPU memory. My understanding is that when trials run on CPUs only, each worker is a Python process holding references to objects in the object store, which lives in shared memory. This prevents each trial from making its own copy of the training data (e.g., two trials sharing a CPU would otherwise each hold a full copy). Do trials that use a GPU have access to the object store as well? If not, does each trial end up with its own copy of the data?