How severely does this issue affect your experience of using Ray?
High: It blocks me from completing my task.
When testing my custom multi-agent environment with the `check_env` function, the result shows that my environment does not extend `MultiAgentEnv`, as follows:
Yes, the error did come from the environment pre-check. I also tested my environment with `print(isinstance(MultiAgentReroutingEnv, MultiAgentEnv))`, which gives the same result (`False`).
In fact, I have no idea why this error could happen; I did extend my environment from `MultiAgentEnv` as required.
So is there anything else I might have overlooked when extending my environment from `MultiAgentEnv`?
@mannyv It is okay, but before displaying the snippet, I need to mention that testing the `BasicMultiAgent` class from ray-project/ray/blob/master/rllib/examples/env/multi_agent.py with `print(isinstance(BasicMultiAgent, MultiAgentEnv))` also returns `False`, which is quite weird.
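As a side note on that check: `isinstance` expects an object, not a class, so passing the class itself always returns `False`; `issubclass` is the class-level test. A minimal sketch with stand-in classes (illustrative names, not the real RLlib ones):

```python
# Stand-in classes to illustrate isinstance vs. issubclass
# (BaseEnv / DerivedEnv are hypothetical, not the real RLlib classes).
class BaseEnv:
    pass

class DerivedEnv(BaseEnv):
    pass

print(isinstance(DerivedEnv, BaseEnv))    # False: DerivedEnv is a class object, not an instance
print(issubclass(DerivedEnv, BaseEnv))    # True: class-level inheritance check
print(isinstance(DerivedEnv(), BaseEnv))  # True: check an instance instead
```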
I cannot see it here, but you need to register either a function that returns an instance of the environment class, or the environment class itself. I cannot tell for sure from what you have shared, but I think you may be registering an instance of the environment.
@mannyv Thanks for your patience. Actually, `env` is a callable instance, which is passed to the `register_env` function defined by Tune. I do not think the problem comes from the registration process, because I once used the same steps to train a single-agent environment and it worked successfully. However, just as I posted, extending `MultiAgentEnv` does not seem to work, which is quite bothersome.
If I have overlooked something, please correct me. Thank you!
@ZKBig I think you're not supposed to pass an env instance. As @mannyv said, you need to pass a Python class or a function that returns an instance. As he said, it is not visible from your code what the second return value of your create-env function is. Take a look here; it should work if you do it like that: https://docs.ray.io/en/latest/rllib/rllib-env.html#configuring-environments
From your code it looks like you want to pass additional arguments to the env constructor via a dict. That can be accomplished by using your env constructor as in the second example from the docs. Example:
from ray.tune.registry import register_env
from ray.rllib.agents import ppo

def env_creator(env_config):
    return MyEnv(env_config, your_custom_arguments)  # env_config isn't even necessary here

register_env("my_env", env_creator)
trainer = ppo.PPOTrainer(env="my_env")
As an alternative, you could also place your "settings" in the env_config dict and just pass your custom env class to the trainer. Then you would only have to pick them out of env_config in your constructor.
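A minimal sketch of that alternative, with a hypothetical `MyEnv` and setting name (the real constructor and keys depend on your environment):

```python
# Hypothetical env that reads its settings out of env_config in the constructor.
class MyEnv:
    def __init__(self, env_config):
        # Pick custom settings from env_config instead of extra constructor args.
        self.num_agents = env_config.get("num_agents", 1)

# With this approach you would pass the class itself to the trainer, e.g.
# ppo.PPOTrainer(env=MyEnv, config={"env_config": {"num_agents": 4}}),
# and RLlib constructs the env with that env_config dict:
env = MyEnv({"num_agents": 4})
print(env.num_agents)  # 4
```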
@Blubberblub Thanks for your patience and detailed help. I finally solved this problem by changing the method of environment registration.
However, there is another question:
I want to apply a policy trained in a single-agent scenario to a multi-agent scenario, where every agent uses this same trained policy. Could you please give some tips on implementing this in RLlib? Thank you!