How severely does this issue affect your experience of using Ray?
- Medium: It makes my task significantly more difficult, but I can work around it.
from pettingzoo.mpe import simple_spread

from ray import tune
from ray.rllib.algorithms.ppo import PPOConfig
from ray.rllib.core.rl_module.marl_module import MultiAgentRLModuleSpec
from ray.rllib.core.rl_module.rl_module import SingleAgentRLModuleSpec
from ray.rllib.env.wrappers.pettingzoo_env import ParallelPettingZooEnv
from ray.tune.registry import register_env
register_env("env", lambda _: ParallelPettingZooEnv(simple_spread.parallel_env()))
# Policy IDs are named exactly like the agent IDs (exact 1:1 mapping).
policies = {f"agent_{i}" for i in range(2)}
base_config = (
    PPOConfig()
    .api_stack(
        # enable_env_runner_and_connector_v2=True,  # <- enabling this causes the failure
        enable_rl_module_and_learner=True,
    )
    .experimental(
        _enable_new_api_stack=True,
    )
    .environment("env")
    .multi_agent(
        policies=policies,
        # Exact 1:1 mapping from AgentID to ModuleID.
        policy_mapping_fn=(lambda aid, *args, **kwargs: aid),
    )
    .training(
        vf_loss_coeff=0.005,
    )
    .rl_module(
        model_config_dict={"vf_share_layers": True},
        rl_module_spec=MultiAgentRLModuleSpec(
            module_specs={p: SingleAgentRLModuleSpec() for p in policies},
        ),
    )
)
# Run the Tune search.
config_dict = base_config.to_dict()
training_function = tune.with_resources(
    training_func,  # user-defined Tune trainable (definition not shown)
    resources=base_config.algo_class.default_resource_request(base_config),
)
tuner = tune.Tuner(
    training_function,
    # Pass in your config dict.
    param_space=config_dict,
)
tuner.fit()
Enabling the new connector API (`enable_env_runner_and_connector_v2=True`) when running the experiment with Tune results in an abrupt, unexplained error. The error logs contain nothing of use either.
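For comparison, running the same `base_config` without Tune may help isolate whether the failure is specific to the Tune code path. This is an untested sketch (it assumes the repro script above has already run and that a Ray runtime is available), not part of the original report:

```python
# Untested sketch: build and train the algorithm directly, bypassing Tune,
# to check whether the same failure occurs without the Tune code path.
algo = base_config.build()  # construct the PPO algorithm from the config above
result = algo.train()       # run a single training iteration
print(sorted(result.keys()))
algo.stop()
```

If this direct path trains fine with `enable_env_runner_and_connector_v2=True` uncommented, the problem likely lies in how Tune serializes or rebuilds the config rather than in the connector stack itself.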