Search_algo error in Tune

Hi everyone,
I am using Tune and RLlib (release 1.13.0) and everything works fine, but when I include a search algorithm I get the following error:

Traceback (most recent call last):
  File "c:\Users\grhen\Documents\GitHub\EP_RLlib\EPRLlib_MA-server(Tune).py", line 369, in <module>
    analysis = tune.run(
  File "C:\Users\grhen\AppData\Local\Programs\Python\Python39\lib\site-packages\ray\tune\tune.py", line 596, in run
    if config and not searcher_set_search_properties_backwards_compatible(
  File "C:\Users\grhen\AppData\Local\Programs\Python\Python39\lib\site-packages\ray\tune\suggest\util.py", line 31, in set_search_properties_backwards_compatible
  File "C:\Users\grhen\AppData\Local\Programs\Python\Python39\lib\site-packages\ray\tune\suggest\bayesopt.py", line 420, in <dictcomp>    bounds = {"/".join(path): resolve_value(domain) for path, domain in domain_vars}
TypeError: sequence item 1: expected str instance, int found
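
From what I can tell, the "/".join(path) call fails because one of the keys in the search-space path is an integer, which apparently happens when the config contains a list of search-space objects (list indices become integer path components). A minimal sketch that seems to reproduce the same error on 1.13 (the parameter names are hypothetical, not my actual config):

from ray import tune
from ray.tune.suggest.bayesopt import BayesOptSearch

# hypothetical search space: the list under "layers" produces paths like
# ("layers", 0), and "/".join() then fails on the integer index
space = {
    "lr": tune.uniform(1e-4, 1e-1),
    "layers": [tune.uniform(32, 64), tune.uniform(32, 64)],
}

def trainable(cfg):
    tune.report(episode_reward_mean=0.0)

tune.run(
    trainable,
    config=space,
    search_alg=BayesOptSearch(metric="episode_reward_mean", mode="max"),
    num_samples=1,
)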

My configuration is:

algo = BayesOptSearch(
    metric="episode_reward_mean",
    mode="max",
)

analysis = tune.run(
    args.run,
    config=config,
    stop=stop,
    verbose=2,
    # if you would like to collect the stream outputs in files for later
    # analysis or troubleshooting, Tune offers a utility parameter,
    # log_to_file, for this
    log_to_file=True,
    # name of the experiment
    name="experimento_2023-02-09_2",
    # a directory where results are stored before being
    # sync'd to head node/cloud storage
    local_dir="C:/Users/grhen/Documents/RLforEP_Resultados",
    # sync our checkpoints via rsync
    # (passing an empty sync config is not required, but done here for clarity)
    sync_config=sync_config,
    scheduler=asha_scheduler,
    search_alg=algo,
    # keep the best five checkpoints at all times
    # (by episode_reward_mean, reported by the trainable, descending)
    checkpoint_score_attr="max-episode_reward_mean",
    keep_checkpoints_num=5,
    # "AUTO" resumes from the last run specified by sync_config if one
    # exists and otherwise starts a new run (True/False force the behavior)
    resume="AUTO",
)
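
In case it is relevant: as far as I understand, BayesOptSearch only handles a flat search space of continuous (uniform float) parameters, so a config along these lines should convert without that error (the hyperparameter names here are just an illustration, not my real setup):

from ray import tune

config = {
    # continuous, float-valued parameters: these are what BayesOptSearch
    # can turn into bounds for the Bayesian optimizer
    "lr": tune.uniform(1e-5, 1e-3),
    "gamma": tune.uniform(0.9, 0.999),
    # plain constants are not part of the search space and pass through
    "train_batch_size": 4000,
}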

Is there any clue how to fix it?
Thanks

Hi @hermmanhender Can you try to upgrade and see if the problem persists?
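
For example, you could check the installed version first; the upgrade command in the comment assumes a pip-based install:

import ray
print(ray.__version__)  # if this still prints 1.13.0, try: pip install -U "ray[rllib,tune]"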