Ray RLlib environment with Ray Tune parameters

I would like to use Ray RLlib together with Ray Tune so that my FooEnv (which extends MultiAgentEnv) can be run with different hyperparameters coming from Ray Tune.

However, I do not see how to register my FooEnv so that it receives the set of hyperparameters coming from Tune. Is there an example I could look at?

Hey @XavierM, could you try this?

tune.register_env("my_foo_env", lambda env_ctx: FooEnv(env_ctx))  # <- use env_ctx as a config dict inside your env's c'tor. This is where the tune hyperparams should arrive (see below).

tune.run("PPO", {
    "env": "my_foo_env",
    "env_config": {
        "num_foo_robots": tune.grid_search([2, 4, 42]),
        "type_bar_weapons": tune.grid_search(["laser", "light-saber"]),
    }
})
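
For completeness, here is a minimal sketch of what the env side could look like. The EnvContext passed to the constructor behaves like a dict holding the values Tune sampled for that trial; the attribute names and the omitted space/agent setup below are only placeholders, not part of the original question:

from ray.rllib.env.multi_agent_env import MultiAgentEnv

class FooEnv(MultiAgentEnv):
    def __init__(self, env_ctx):
        super().__init__()
        # env_ctx behaves like a dict: the values sampled by Tune from
        # "env_config" arrive here, e.g. num_foo_robots == 2, 4, or 42
        # for a given trial.
        self.num_foo_robots = env_ctx["num_foo_robots"]
        self.type_bar_weapons = env_ctx["type_bar_weapons"]
        # ... define observation/action spaces and the usual
        # reset()/step() methods here.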

Does the above make sense?

Yes, this solves what I wanted to do. Thank you so much @sven1977!

In hindsight, I realise that part of the answer is also covered in https://github.com/ray-project/ray/blob/master/doc/source/rllib-env.rst.