Hi again. So yes, only 31 GB is available (as set by --shm-size=Xgb). Any ideas? Running on GCP.
WARNING: The object store is using /tmp instead of /dev/shm because /dev/shm has only 31457280000 bytes available. This will harm performance! You may be able to free up space by deleting files in /dev/shm. If you are inside a Docker container, you can increase /dev/shm size by passing '--shm-size=Xgb' to 'docker run' (or add it to the run_options list in a Ray cluster config). Make sure to set this to more than 2gb.
I think this might be because Ray is mis-detecting the available memory and trying to set the object store size to more than 31 GiB. Can you try setting the object store size explicitly to less than that? E.g.,
ray start --object-store-memory=10000000000 or ray.init(object_store_memory=10000000000)
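Note that object_store_memory is specified in bytes, so 10000000000 is roughly 10 GB. A minimal Python sketch of the ray.init form, where the size is just an illustrative value:

import ray

# Cap the object store at ~10 GB (value is in bytes) so Ray does not try to
# allocate more than the container's /dev/shm can hold.
ray.init(object_store_memory=10_000_000_000)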
I am using GCP as well, and I get the same error message. Is it possible to configure Ray to use a different folder entirely for shared memory? E.g. I want to use the folder $HOME$/NFS/dev/shared.
I am working on a system where the machine nodes have limited amounts of memory (~10 GB max) and the users have very small personal hard disks (~5 GB). I would therefore like Ray to use the mounted NFS drive for its shared memory. Is this possible?
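The object store location can be redirected when Ray starts. A hedged sketch in Python, assuming a Ray version where ray.init accepts the internal _plasma_directory argument (older versions use plasma_directory, and ray start has a matching --plasma-directory flag); the NFS path and the size are placeholders:

import ray

# Put the object store's memory-mapped files on the NFS mount instead of
# /dev/shm, and cap the store at ~5 GB (bytes). Expect this to be slower
# than shared memory.
ray.init(
    object_store_memory=5_000_000_000,
    _plasma_directory="/home/user/NFS/dev/shared",
)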
Thank you for the answer! However, I am currently using the Pool API. Is there a way to pass the plasma_directory argument or the object_store_memory to the Pool call?
from ray.util.multiprocessing import Pool

with Pool(4) as p:
    # Do something with the pool here
    ...
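Pool itself does not take those arguments, but it attaches to an already-initialized Ray instance, so one option is to call ray.init first with the desired object store settings. A hedged sketch, reusing the _plasma_directory assumption from above with placeholder values:

import ray
from ray.util.multiprocessing import Pool

# Initialize Ray first so the Pool reuses this instance and its object store
# configuration (size and path below are just examples).
ray.init(
    object_store_memory=5_000_000_000,
    _plasma_directory="/home/user/NFS/dev/shared",
)

def square(x):
    return x * x

with Pool(4) as p:
    print(p.map(square, range(8)))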