Hi @Advertise-Here , thanks for raising this issue and welcome to the forum.
When you bring a custom module, you also need to provide a ray.rllib.core.models.catalog.Catalog for it. Defining a custom RLModule is, however, not required, as long as you do not want to use a module you programmed yourself.
So, if you try again without the .rl_module() method in the configuration, this should work:
config = (
    DQNConfig()
    .environment(env=HI_Environment, env_config=env_config)
    .multi_agent(
        policies={"policy_alice", "policy_bob"},
        # The new API stack calls this fn with extra args (episode, etc.),
        # so the lambda needs to accept them.
        policy_mapping_fn=lambda agent_id, episode, **kwargs: f"policy_{agent_id}",
        policies_to_train=["policy_alice", "policy_bob"],
    )
    .api_stack(
        enable_rl_module_and_learner=True,
        enable_env_runner_and_connector_v2=True,
    )
    .framework("torch")
    .env_runners(num_env_runners=1)
    .training(dueling=False, double_q=False)
)
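One thing to double-check: on the new API stack, RLlib calls `policy_mapping_fn` with additional arguments beyond `agent_id`, so the lambda should accept them. As a standalone sketch of what the mapping does (assuming your environment's agent ids are `"alice"` and `"bob"`, which I am inferring from your policy names; adjust to your env's actual agent ids):

```python
# Standalone sketch of the policy mapping used in the config above.
# Assumption: HI_Environment's agent ids are "alice" and "bob"
# (inferred from the policy names; not confirmed from your env code).
def policy_mapping_fn(agent_id, episode=None, **kwargs):
    # Map each agent id to the policy trained for it.
    return f"policy_{agent_id}"

print(policy_mapping_fn("alice"))  # -> policy_alice
print(policy_mapping_fn("bob"))    # -> policy_bob
```

If the ids returned by your environment differ from the policy names, a small dict lookup inside the function is usually clearer than string formatting.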