How do I use Ray Tune to tune environment parameters in RLlib 2.7?

I have a custom environment, let’s call it CustomEnv. That environment has several parameters that I want to sweep with Tune to find out which settings work and which don’t. What’s the correct way to do this with Ray 2.7? I tried several tune.run and tune.Tuner examples, but they all get stuck and don’t do anything.

For normal training, I use something like this, which I hope is the current way of doing it:

env_config = {  
    "param1": "param1",
    "param2": "param2",
}

algo = (
    APPOConfig()
    .rollouts(num_rollout_workers=2)
    .framework("torch")
    .resources(num_gpus=1, num_gpus_per_worker=0.5)
    .environment(env=CustomEnv, env_config=env_config)
    .training(model={"fcnet_hiddens": [1024, 1024, 1024]})
    .debugging(log_level="ERROR", log_sys_usage=False)
    .build()
)

for i in range(10):
    result = algo.train()

Now I want to move this to Tune so I can narrow down which environment parameters to use in the end.
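For reference, here is roughly the kind of thing I’d expect to work (just a sketch, assuming entries in env_config can be Tune search spaces and that the built config can be handed to tune.Tuner via param_space; the parameter values are placeholders):

```python
from ray import train, tune
from ray.rllib.algorithms.appo import APPOConfig

# Sketch only: CustomEnv is my environment class from above,
# and the search-space values are placeholders.
param_space = (
    APPOConfig()
    .rollouts(num_rollout_workers=2)
    .framework("torch")
    .resources(num_gpus=1, num_gpus_per_worker=0.5)
    .environment(
        env=CustomEnv,
        env_config={
            # assumption: Tune search spaces are allowed inside env_config
            "param1": tune.grid_search(["a", "b"]),
            "param2": tune.choice([1, 2, 4]),
        },
    )
    .training(model={"fcnet_hiddens": [1024, 1024, 1024]})
    .to_dict()
)

tuner = tune.Tuner(
    "APPO",
    param_space=param_space,
    run_config=train.RunConfig(stop={"training_iteration": 10}),
)
results = tuner.fit()

# Inspect which env_config the best trial used.
best = results.get_best_result(metric="episode_reward_mean", mode="max")
print(best.config["env_config"])
```

Is this the right pattern, or is something here why my attempts hang?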

Also let me know if anything I’m currently doing isn’t aligned with the new APIs. Thanks!

Hi @Kacha, would the documentation help here: Getting Started with RLlib — Ray 2.7.0?

The second code snippet in the linked section shows how Ray Tune can be used with RLlib.
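The basic pattern there is roughly the following (a minimal sketch from memory of the docs, not a copy of the snippet; "PPO" and CartPole-v1 are just illustrative choices, so swap in "APPO" and your CustomEnv):

```python
from ray import train, tune

# Minimal sketch of the Tune-with-RLlib pattern: pass the algorithm
# name as the trainable and a config dict (with search spaces) as
# param_space. Algorithm and env here are illustrative.
tuner = tune.Tuner(
    "PPO",
    param_space={
        "env": "CartPole-v1",
        "framework": "torch",
        "lr": tune.grid_search([1e-4, 1e-3]),
    },
    run_config=train.RunConfig(
        stop={"training_iteration": 5},
    ),
)
results = tuner.fit()
```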