[Bug] Env must be one of the supported types: BaseEnv, gym.Env, MultiAgentEnv, VectorEnv, RemoteBaseEnv

How severely does this issue affect your experience of using Ray?

  • High: It blocks me from completing my task.

When testing my custom multi-agent environment with the ‘check_env’ function, the result shows that my environment does not extend ‘MultiAgentEnv’, as shown below:

However, I did extend my environment from MultiAgentEnv, which is shown as follows:

Could anyone help me figure out what is wrong with it? Thanks.

I think the error message comes from the pre_check for the environment. I don’t know where it is called though.

Anyway, you could do a first check like this, since this is what the checker scripts do as well:

print(isinstance(MultiAgentReroutingEnv, MultiAgentEnv))
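(A side note on that check: isinstance() applied to the class object itself will return False here, because the class is not itself an instance of MultiAgentEnv; the environment pre-check runs isinstance() on an environment instance. A closer reproduction of what the checker does might look like the sketch below, where MultiAgentReroutingEnv and its constructor arguments stand in for your own code.)

from ray.rllib.env.multi_agent_env import MultiAgentEnv

# Build an instance first; the pre-check receives an env object, not a class.
env = MultiAgentReroutingEnv(env_config={})

print(issubclass(MultiAgentReroutingEnv, MultiAgentEnv))  # class-level check
print(isinstance(env, MultiAgentEnv))  # what the pre-check actually tests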

For examples of how to implement a multi-agent env, you could look here:

Thanks for your timely reply.

Yes, the error did come from the pre_check for the environment. I also tested my environment using ‘print(isinstance(MultiAgentReroutingEnv, MultiAgentEnv))’, and it returns False as well.

In fact, I have no idea why this error happens; I did extend my environment from MultiAgentEnv as required.

So is there anything else I might have overlooked when extending my environment from MultiAgentEnv?

Hi @ZKBig,

Can you share the snippet of code where you define and register the env and pass it to the trainer before calling tune?

@mannyv Okay, but before showing the snippet, I should mention that when I test the ‘BasicMultiAgent’ class from ray-project/ray/blob/master/rllib/examples/env/multi_agent.py with ‘print(isinstance(BasicMultiAgent, MultiAgentEnv))’, it also returns False, which is quite weird.

The snippet is shown as follows:

@ZKBig,

I cannot see it here, but you need to register either a function that will return an instance of the environment class, or the environment class itself. I cannot tell for sure from what you have shared, but I think you may be registering an instance of the environment.
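For example, something along these lines (a rough sketch; the env class name and config are placeholders for yours):

from ray.tune.registry import register_env

# Avoid registering an already-constructed env object:
# register_env("my_env", MultiAgentReroutingEnv(env_config))

# Register a creator function (or the class itself) instead:
register_env("my_env", lambda env_config: MultiAgentReroutingEnv(env_config))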

@mannyv Thanks for your patience. Actually, ‘env’ is a callable instance, which is passed to the ‘register_env’ function defined by tune. In fact, I do not think the problem comes from the registration process, because I once used the same steps to train a single-agent environment and it worked successfully. However, just as I posted, it seems that extending from ‘MultiAgentEnv’ does not work, which is quite bothersome.

If I have overlooked something, please correct me. Thank you!

@ZKBig I think you’re not supposed to pass an env instance. As @mannyv said, you need to pass a Python class or a function that returns an instance. It’s also not visible from your code what the second return value of your create-env function is. Take a look here; it should work if you do it like that:
https://docs.ray.io/en/latest/rllib/rllib-env.html#configuring-environments
From your code it looks like you want to pass additional arguments to the env constructor via a dict. That can be accomplished by calling your env constructor as in the second example from the docs. Example:

from ray.tune.registry import register_env
from ray.rllib.agents import ppo  # moved to ray.rllib.algorithms in Ray 2.x

def env_creator(env_config):
    return MyEnv(env_config, your_custom_arguments)  # env_config isn't even necessary here

register_env("my_env", env_creator)
trainer = ppo.PPOTrainer(env="my_env")

As an alternative, you could also place your “settings” in your env_config dict and just pass your custom env class to the trainer. Then you would only have to pick them up from the env_config in your constructor.

env_config = {
    "my_settings": {}  # your custom settings go here
}
trainer = ppo.PPOTrainer(env=MyEnv, config={
    "env_config": env_config
})

@Blubberblub Thanks for your patience and detailed help. I finally solved this problem by changing the way the environment is registered.

However, there is another question:
I want to apply a trained policy obtained from a single-agent scenario to a multi-agent scenario, where every agent should use this same trained policy. Could you please give some tips on how to implement this in RLlib? Thank you!
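For context, what I have in mind is something along the lines of the sketch below (the env name, policy ID and checkpoint path are just placeholders, and I am not sure whether this is the right approach):

from ray.rllib.agents import ppo

config = {
    "env": "my_multi_agent_env",
    "multiagent": {
        # A single policy ID; every agent is mapped to it below.
        "policies": {"shared_policy"},
        "policy_mapping_fn": lambda agent_id, *args, **kwargs: "shared_policy",
    },
}
trainer = ppo.PPOTrainer(config=config)

# Then copy the weights of the previously trained single-agent policy into it:
# single_agent_trainer = ppo.PPOTrainer(env=MySingleAgentEnv)
# single_agent_trainer.restore("/path/to/single_agent_checkpoint")
# weights = single_agent_trainer.get_policy().get_weights()
# trainer.get_policy("shared_policy").set_weights(weights)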

This definitely appears to be a bug. The same error is produced when running the official custom-env example in ray/custom_env.py at master · ray-project/ray · GitHub


Upgrading to Ray 2.3.0 appears to resolve the issue