Sorry to hear you are running into this issue! We unfortunately didn’t anticipate this problem.
A few questions:
Just to clarify, are you using the pip or conda field of runtime_env in your ray.init() command? (See the sketch after these questions for what the two variants look like.)
Do you mind sharing which country you’re in? And do you happen to know exactly what is blocked: is it all of Amazon S3, and are all of the nightly links at Installing Ray — Ray v1.9.0 broken in your country as well?
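For reference, the two variants look roughly like this (a minimal sketch; the package names and environment name are placeholders):

```python
import ray

# Variant 1: the pip field of runtime_env
ray.init(runtime_env={"pip": ["requests==2.26.0", "pandas"]})

# Variant 2: the conda field, referencing a named conda environment
# ray.init(runtime_env={"conda": "my_env_name"})
```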
There are a few short-term options here:
1. Don’t use the pip or conda field of runtime_env, and instead preinstall your environment on the cluster (a cluster-config sketch follows this list).
2. We are working on an enhancement to the pip field of runtime_env that changes its behavior. Previously, it would dynamically download and install the current Ray version on the cluster in a new isolated virtual environment. With the new behavior, it only installs the non-Ray packages listed in the pip field and inherits the Ray installation that is already present on the cluster. This will ship in Ray 1.10, but the change should be in the Ray nightly wheels by next week. It should circumvent your problem because nothing will be downloaded from AWS (see the ray.init() sketch after this list).
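For option 1, if you’re using the Ray cluster launcher, one way to preinstall dependencies is with setup_commands in your cluster YAML. A minimal sketch; the package pins are placeholders:

```yaml
# Fragment of a hypothetical Ray cluster launcher config (cluster.yaml).
# These commands run on each node during setup, so the packages are
# already installed and no runtime_env field is needed at ray.init() time.
setup_commands:
    - pip install pandas==1.3.5 requests==2.26.0
```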
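For option 2, once the enhancement lands, the pip field would only need to list your non-Ray dependencies, roughly like this (a sketch; the package names are placeholders):

```python
import ray

# Only non-Ray packages go in the pip field. With the new behavior,
# the Ray installation already present on the cluster is inherited,
# so nothing is downloaded from AWS.
ray.init(runtime_env={"pip": ["pandas", "requests"]})
```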
I solved the problem last weekend. There are three different ways to solve it:
1. Update DNS.
2. Build a Docker image that preloads the Python environment, so the runtime_env field in ray.init() isn’t needed (see the first Dockerfile sketch below).
3. When building the image, replace the download address of the Ray .whl file so it points to your own object store (see the second Dockerfile sketch below).
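For example, option 2 could look like this (a hypothetical Dockerfile; the base image tag and package list are placeholders):

```dockerfile
# Hypothetical Dockerfile: bake the Python environment into the image
# so jobs run without the runtime_env field in ray.init().
FROM rayproject/ray:1.9.0
RUN pip install --no-cache-dir pandas==1.3.5 requests==2.26.0
```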
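And option 3 might look roughly like this, assuming the Ray wheel has been mirrored to an object store you can reach (the URL and wheel filename are placeholders):

```dockerfile
# Hypothetical Dockerfile: install Ray from a wheel hosted in your own
# object store instead of the default AWS download address.
FROM python:3.8-slim
RUN pip install https://my-object-store.example.com/ray-1.9.0-cp38-cp38-manylinux2014_x86_64.whl
```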