I cannot see it here, but you need to register either a function that returns an instance of the environment class, or the environment class itself. I cannot tell for sure from what you have shared, but I think you may be registering an instance of the environment.
@mannyv Thanks for your patience. Actually, `env` is a callable instance, which is passed to the `register_env` function defined by tune. In fact, I do not think the problem comes from the registration process, because I once used the same steps to train a single-agent environment and it worked successfully. However, just as I posted, it seems that extending from `MultiAgentEnv` does not work, which is quite puzzling.
If I overlook something, please correct me, thank you!
@ZKBig I think you're not supposed to pass an env instance. As @mannyv said, you need to pass a Python class or a function that returns an instance. As he said, it's not visible from your code what the second return value of your create-env function is. Take a look here, it should work if you do it like that: https://docs.ray.io/en/latest/rllib/rllib-env.html#configuring-environments
From your code it looks like you want to pass additional arguments to the env constructor via a dict. That can be accomplished by registering a creator function, like in the second example from the docs. Example:

from ray.tune.registry import register_env
from ray.rllib.agents import ppo

register_env("my_env", lambda env_config: MyEnv(env_config, your_custom_arguments))  # env_config isn't even necessary here
trainer = ppo.PPOTrainer(env="my_env")
As an alternative, you could also place your "settings" in the env_config dict and just pass your custom env class to the trainer. Then you only have to pick them up from env_config in your constructor.
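A minimal sketch of that alternative (the class and the `speed_limit` setting are invented examples, not from the thread):

```python
# Hypothetical env whose settings come from env_config.
# In real code this class would subclass gym.Env or MultiAgentEnv.
class MyEnv:
    def __init__(self, env_config):
        # Pick up custom settings from the env_config dict
        self.speed_limit = env_config.get("speed_limit", 30)

config = {
    "env": MyEnv,                       # pass the class itself, not an instance
    "env_config": {"speed_limit": 50},  # your custom settings live here
}
# trainer = ppo.PPOTrainer(config=config)  # RLlib would call MyEnv(env_config)
```

This way no custom creator function is needed, since RLlib constructs the env from the class and hands it the `env_config` dict.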
@Blubberblub Thanks for your patience and detailed help. I finally solved this problem by changing how the environment is registered.
However, there is another question:
I want to apply a policy trained in a single-agent scenario to a multi-agent scenario, where every agent uses the same trained policy. Could you please give some tips on how to implement this in RLlib? Thank you!
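For reference, the usual RLlib pattern for this is to define one shared policy and map every agent id to it via `policy_mapping_fn`; the single-agent weights can then be loaded into that shared policy (e.g. via `set_weights`). A sketch, with the policy id and config shape as assumptions:

```python
# One shared policy id for all agents (a common RLlib multi-agent pattern)
SHARED_POLICY = "shared_policy"

def policy_mapping_fn(agent_id, *args, **kwargs):
    # Every agent, whatever its id, is mapped to the same policy
    return SHARED_POLICY

multiagent_config = {
    "multiagent": {
        # In real code each policy entry is a full spec, e.g.
        # {SHARED_POLICY: (None, obs_space, act_space, {})}
        "policies": {SHARED_POLICY},
        "policy_mapping_fn": policy_mapping_fn,
    },
}
# After building the trainer, the weights from the single-agent checkpoint
# could be copied in, roughly:
# trainer.get_policy(SHARED_POLICY).set_weights(saved_weights)
```

Since all agents resolve to the same policy id, they all act with (and, during training, jointly update) the one set of weights.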