Error after a basic evaluate command

Hello community,

I don’t understand why I get the following error after running a basic command: rllib evaluate checkpoint_000002/ --algo PPO --env sar-v0.1 --config '{"num_workers": 1}'. Do you have any idea what might be causing it?

╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /data/users/dahmadoun/conda-envs/gym_env/lib/python3.10/site-packages/ray/rllib/scripts.py:163   │
│ in evaluate                                                                                      │
│                                                                                                  │
│   160 │   """                                                                                    │
│   161 │   from ray.rllib import evaluate as evaluate_module                                      │
│   162 │                                                                                          │
│ ❱ 163 │   evaluate_module.run(                                                                   │
│   164 │   │   checkpoint=checkpoint,                                                             │
│   165 │   │   algo=algo,                                                                         │
│   166 │   │   env=env,                                                                           │
│                                                                                                  │
│ ╭─────────────────────────────────────────── locals ───────────────────────────────────────────╮ │
│ │            algo = 'PPO'                                                                      │ │
│ │      checkpoint = 'checkpoint_000002/'                                                       │ │
│ │          config = '{"num_workers": 1}'                                                       │ │
│ │             env = 'sar-v0.1'                                                                 │ │
│ │        episodes = 0                                                                          │ │
│ │ evaluate_module = <module 'ray.rllib.evaluate' from                                          │ │
│ │                   '/data/users/dahmadoun/conda-envs/gym_env/lib/python3.10/site-packages/ra… │ │
│ │      local_mode = False                                                                      │ │
│ │             out = None                                                                       │ │
│ │          render = False                                                                      │ │
│ │       save_info = False                                                                      │ │
│ │           steps = 10000                                                                      │ │
│ │  track_progress = False                                                                      │ │
│ │      use_shelve = False                                                                      │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────╯ │
│                                                                                                  │
│ /data/users/dahmadoun/conda-envs/gym_env/lib/python3.10/site-packages/ray/rllib/evaluate.py:222  │
│ in run                                                                                           │
│                                                                                                  │
│   219 │   evaluation_config = copy.deepcopy(                                                     │
│   220 │   │   config_args.get("evaluation_config", config.get("evaluation_config", {}))          │
│   221 │   )                                                                                      │
│ ❱ 222 │   config = merge_dicts(config, evaluation_config)                                        │
│   223 │   # Merge with command line `--config` settings (if not already the same anyways).       │
│   224 │   config = merge_dicts(config, config_args)                                              │
│   225 │   if not env:                                                                            │
│                                                                                                  │
│ ╭─────────────────────────────────────── locals ────────────────────────────────────────╮        │
│ │              algo = 'PPO'                                                             │        │
│ │        checkpoint = 'checkpoint_000002/'                                              │        │
│ │            config = <ray.rllib.algorithms.ppo.ppo.PPOConfig object at 0x7f3ece134910> │        │
│ │       config_args = {'num_workers': 1}                                                │        │
│ │        config_dir = 'checkpoint_000002'                                               │        │
│ │       config_path = 'checkpoint_000002/../params.pkl'                                 │        │
│ │               env = 'sar-v0.1'                                                        │        │
│ │          episodes = 0                                                                 │        │
│ │ evaluation_config = None                                                              │        │
│ │                 f = <_io.BufferedReader name='checkpoint_000002/../params.pkl'>       │        │
│ │        local_mode = False                                                             │        │
│ │               out = None                                                              │        │
│ │            render = False                                                             │        │
│ │         save_info = False                                                             │        │
│ │             steps = 10000                                                             │        │
│ │    track_progress = False                                                             │        │
│ │        use_shelve = False                                                             │        │
│ ╰───────────────────────────────────────────────────────────────────────────────────────╯        │
│                                                                                                  │
│ /data/users/dahmadoun/conda-envs/gym_env/lib/python3.10/site-packages/ray/_private/dict.py:22 in │
│ merge_dicts                                                                                      │
│                                                                                                  │
│    19 │   │    dict: A new dict that is d1 and d2 deep merged.                                   │
│    20 │   """                                                                                    │
│    21 │   merged = copy.deepcopy(d1)                                                             │
│ ❱  22 │   deep_update(merged, d2, True, [])                                                      │
│    23 │   return merged                                                                          │
│    24                                                                                            │
│    25                                                                                            │
│                                                                                                  │
│ ╭────────────────────────────────── locals ──────────────────────────────────╮                   │
│ │     d1 = <ray.rllib.algorithms.ppo.ppo.PPOConfig object at 0x7f3ece134910> │                   │
│ │     d2 = None                                                              │                   │
│ │ merged = <ray.rllib.algorithms.ppo.ppo.PPOConfig object at 0x7f3ecded0cd0> │                   │
│ ╰────────────────────────────────────────────────────────────────────────────╯                   │
│                                                                                                  │
│ /data/users/dahmadoun/conda-envs/gym_env/lib/python3.10/site-packages/ray/_private/dict.py:58 in │
│ deep_update                                                                                      │
│                                                                                                  │
│    55 │   override_all_if_type_changes = override_all_if_type_changes or []                      │
│    56 │   override_all_key_list = override_all_key_list or []                                    │
│    57 │                                                                                          │
│ ❱  58 │   for k, value in new_dict.items():                                                      │
│    59 │   │   if k not in original and not new_keys_allowed:                                     │
│    60 │   │   │   raise Exception("Unknown config parameter `{}` ".format(k))                    │
│    61                                                                                            │
│                                                                                                  │
│ ╭─────────────────────────────────────────── locals ───────────────────────────────────────────╮ │
│ │        allow_new_subkey_list = []                                                            │ │
│ │                     new_dict = None                                                          │ │
│ │             new_keys_allowed = True                                                          │ │
│ │                     original = <ray.rllib.algorithms.ppo.ppo.PPOConfig object at             │ │
│ │                                0x7f3ecded0cd0>                                               │ │
│ │ override_all_if_type_changes = []                                                            │ │
│ │        override_all_key_list = []                                                            │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────╯ │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
AttributeError: 'NoneType' object has no attribute 'items'

This is a string: --config '{"num_workers": 1}'

Do you need to pass it as a string or as a dict?
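For what it's worth, the locals in your traceback show config_args = {'num_workers': 1}, so the string form did get parsed into a dict before the crash. A quick standalone sketch of that step, assuming the CLI treats the --config value as JSON (this is not the actual RLlib parsing code, just an illustration):

```python
import json

# The --config value arrives from the shell as a string; parsing it as JSON
# yields the dict seen in the traceback's locals (config_args).
config_args = json.loads('{"num_workers": 1}')
print(config_args)  # {'num_workers': 1}
```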

Hello,

Thank you for your answer.
Actually, the --help mentions that it should be passed as text. But even without this parameter, running only rllib evaluate checkpoint_000002/ --algo PPO --env sar-v0.1, I get the same error.
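Looking at the locals in the traceback, evaluation_config comes back as None after loading checkpoint_000002/../params.pkl, so the failure happens when merge_dicts is called with None as its second argument, regardless of what I pass to --config. Here is a simplified stand-in (not the real ray._private.dict code) that reproduces the same AttributeError:

```python
import copy

def merge_dicts(d1, d2):
    # Simplified stand-in for ray._private.dict.merge_dicts:
    # deep-copy d1, then update it with the items of d2.
    merged = copy.deepcopy(d1)
    for k, value in d2.items():  # fails when d2 is None
        merged[k] = value
    return merged

merge_dicts({"num_workers": 1}, None)
# AttributeError: 'NoneType' object has no attribute 'items'
```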