Ray Tune: Tuner param_space for reinforcement learning

  1. Where can I find the documentation that describes which parameters the Tuner accepts under param_space?

  2. For use with reinforcement learning, is the environment specified in the parameter space?

I've taken an excerpt from the RL-with-PBT example on your blog:
```python
from ray import train, tune

# pbt, args, and stopping_criteria are defined earlier in the full example.
tuner = tune.Tuner(
    "PPO",
    tune_config=tune.TuneConfig(
        metric="episode_reward_mean",
        mode="max",
        scheduler=pbt,
        num_samples=1 if args.smoke_test else 2,
    ),
    param_space={
        "env": "Humanoid-v2",
        "kl_coeff": 1.0,
        "num_workers": 4,
        "num_cpus": 1,  # number of CPUs to use per trial
        "num_gpus": 0,  # number of GPUs to use per trial
        "model": {"free_log_std": True},
        # These params are tuned from a fixed starting value.
        "lambda": 0.95,
        "clip_param": 0.2,
        "lr": 1e-4,
        # These params start off randomly drawn from a set.
        "num_sgd_iter": tune.choice([10, 20, 30]),
        "sgd_minibatch_size": tune.choice([128, 512, 2048]),
        "train_batch_size": tune.choice([10000, 20000, 40000]),
    },
    run_config=train.RunConfig(stop=stopping_criteria),
)
```

Where can I find the documentation that describes which parameters the Tuner accepts under param_space?

https://docs.ray.io/en/latest/tune/tutorials/tune-search-spaces.html
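
That page covers the search-space API. As a quick illustration of the primitives param_space accepts (the parameter names below are placeholders, not tied to any particular trainable):

```python
from ray import tune

# Sketch of Tune search-space primitives inside a param_space dict.
# Parameter names are illustrative only.
param_space = {
    "lr": tune.loguniform(1e-5, 1e-2),           # continuous, sampled on a log scale
    "momentum": tune.uniform(0.1, 0.9),          # continuous, sampled uniformly
    "batch_size": tune.choice([128, 256, 512]),  # one value drawn from a fixed set
    "num_layers": tune.grid_search([2, 3, 4]),   # every listed value is tried
    "dropout": 0.1,                              # plain constants are passed through unchanged
}
```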

For use with reinforcement learning, is the environment specified in the parameter space?

Yes. RLlib's AlgorithmConfig (which includes the environment config) is tunable, and the environment travels inside param_space as the "env" key, as in the excerpt above (see the sketch below and these resources):
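
A minimal sketch of that pattern, assuming Ray 2.x and the PPOConfig builder API (the environment name and hyperparameter ranges here are just illustrative):

```python
from ray import tune
from ray.rllib.algorithms.ppo import PPOConfig

# Build an RLlib AlgorithmConfig: the environment is set on the config itself,
# and individual hyperparameters can be given Tune search spaces.
config = (
    PPOConfig()
    .environment("CartPole-v1")  # the env is part of the algorithm config
    .training(
        lr=tune.loguniform(1e-5, 1e-3),
        train_batch_size=tune.choice([2000, 4000]),
    )
)

# The config (converted to a dict) becomes the param_space, so the "env" key
# is carried along with the rest of the hyperparameters.
tuner = tune.Tuner(
    "PPO",
    param_space=config.to_dict(),
    tune_config=tune.TuneConfig(
        metric="episode_reward_mean",
        mode="max",
        num_samples=2,
    ),
)
results = tuner.fit()
```

The plain-dict form in the excerpt above works the same way; the key point for question 2 is that the environment ("env") is just another entry in param_space.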