Ray.init() fails complaining about missing JobConfig attribute '_parsed_runtime_env'

I have a k8s cluster with a Ray cluster installed, running Ray 1.8.0 nicely. Lately I have been looking at upgrade options to see whether some functionality (object store performance) would benefit from an upgrade. However, as soon as I install a version newer than 1.8.0, I get an error: AttributeError: 'JobConfig' object has no attribute '_parsed_runtime_env'
Any advice on what I could be missing, doing wrong, or not have configured correctly?
There seems to be almost no discussion about this, for something that should perhaps have affected many people…

Here’s the full stack trace.

ray, version 1.11.0

initializing ray...
Traceback (most recent call last):
  File "test_connect_1.11.0-min.py", line 9, in <module>
    ray_client = ray.init(address=f"ray://127.0.0.1:10001", _redis_password='5241590000000000') #, namespace=ns)
  File "/usr/local/anaconda3/envs/ejops/lib/python3.9/site-packages/ray/_private/client_mode_hook.py", line 105, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/anaconda3/envs/ejops/lib/python3.9/site-packages/ray/worker.py", line 800, in init
    return builder.connect()
  File "/usr/local/anaconda3/envs/ejops/lib/python3.9/site-packages/ray/client_builder.py", line 151, in connect
    client_info_dict = ray.util.client_connect.connect(
  File "/usr/local/anaconda3/envs/ejops/lib/python3.9/site-packages/ray/util/client_connect.py", line 33, in connect
    conn = ray.connect(
  File "/usr/local/anaconda3/envs/ejops/lib/python3.9/site-packages/ray/util/client/__init__.py", line 228, in connect
    conn = self.get_context().connect(*args, **kw_args)
  File "/usr/local/anaconda3/envs/ejops/lib/python3.9/site-packages/ray/util/client/__init__.py", line 88, in connect
    self.client_worker._server_init(job_config, ray_init_kwargs)
  File "/usr/local/anaconda3/envs/ejops/lib/python3.9/site-packages/ray/util/client/worker.py", line 697, in _server_init
    raise ConnectionAbortedError(
ConnectionAbortedError: Initialization failure from server:
Traceback (most recent call last):
  File "/home/ray/anaconda3/lib/python3.8/site-packages/ray/util/client/server/proxier.py", line 621, in Datapath
    self.reconnect_grace_periods[client_id] = \
  File "/home/ray/anaconda3/lib/python3.8/site-packages/ray/util/client/server/proxier.py", line 272, in start_specific_server
  File "/home/ray/anaconda3/lib/python3.8/site-packages/ray/job_config.py", line 99, in get_serialized_runtime_env
    pb.ray_namespace = str(uuid.uuid4())
AttributeError: 'JobConfig' object has no attribute '_parsed_runtime_env'

Here is a simple test code sample that reproduces the error (I have port-forwarded port 10001 from k8s to localhost, same port number):

import json
import subprocess as sp

import ray

ns = 'serve'
try:
    # Print the client-side Ray version for reference.
    print(sp.check_output('ray --version'.split()).decode())
    print("initializing ray...")
    # Connect via Ray Client to the port-forwarded head node.
    ray_client = ray.init(address="ray://127.0.0.1:10001",
                          _redis_password='5241590000000000')  # , namespace=ns)
    print("initializing ray...  done!")
except (RuntimeError, ConnectionError) as err:
    # ConnectionAbortedError (the error above) is a subclass of ConnectionError.
    print(f"exception seen: {err}")

# Dump the cluster's node list, then disconnect.
print(json.dumps(ray.nodes(), indent=2))
ray.shutdown()

The same code run against a local Ray works:

ray, version 1.11.0

initializing ray...
initializing ray...  done!
[
  {
    "NodeID": "b16b50dcf7ef2ca92c39f04b7a2d455d044830d056dcd9258e5baaa9",
    "Alive": true,
    "NodeManagerAddress": "127.0.0.1",
    "NodeManagerHostname": "MacBook-Pro.local",
    "NodeManagerPort": 56329,
    "ObjectManagerPort": 56328,
    "ObjectStoreSocketName": "/tmp/ray/session_2022-03-09_11-19-02_987257_97634/sockets/plasma_store",
    "RayletSocketName": "/tmp/ray/session_2022-03-09_11-19-02_987257_97634/sockets/raylet",
    "MetricsExportPort": 61581,
    "alive": true,
    "Resources": {
      "node:127.0.0.1": 1.0,
      "memory": 5633810023.0,
      "object_store_memory": 2147483648.0,
      "CPU": 16.0
    }
  }
]

You might have a different Ray version in your client and in the cluster.

I’m also facing the same issue. The JobConfig attribute mentioned above was optional in Ray 1.9.x and has since become mandatory.

The Ray Core API docs (Ray 1.11.0) also give no proper explanation of that attribute.

@architkulkarni / @ckw017 maybe one of you knows?

Can you post a stack trace of the error if possible? I suspect this is a versioning issue as well (check the version of Ray where you’re calling ray.init and compare it to the version of Ray on the head node of your cluster).

Agreed on checking the versions. One way to do it is to run ray.__version__ and ray.__commit__ in Python, or ray --version from the command line.
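To make that comparison mechanical, here is a minimal sketch that parses the version out of a `ray --version` string and compares client and server. The `parse_ray_version` helper is hypothetical (not part of Ray's API), and the two version strings are illustrative; in practice you would capture them from `ray --version` on the client machine and on the head node (e.g. via `kubectl exec`).

```python
import re


def parse_ray_version(output: str) -> str:
    """Extract e.g. '1.11.0' from a line like 'ray, version 1.11.0'."""
    match = re.search(r"version\s+([\w.]+)", output)
    if match is None:
        raise ValueError(f"unexpected `ray --version` output: {output!r}")
    return match.group(1)


# Illustrative values; substitute the real outputs from each side.
client = parse_ray_version("ray, version 1.11.0")
server = parse_ray_version("ray, version 1.8.0")

if client != server:
    print(f"version mismatch: client {client} vs server {server}")
```

Ray Client requires the client and server versions to match, so failing fast like this before calling ray.init() gives a much clearer error than the AttributeError above.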

@ckw017 / @architkulkarni

Thanks for your immediate responses. It turned out the Ray version in my cluster and the version on the machine calling ray.init() were not the same.

Now resolved.
