Failed to register a custom env

I get this error when training my custom env:

Env must be one of the supported types: BaseEnv, gym.Env, MultiAgentEnv, VectorEnv, RemoteBaseEnv

But I have already registered my env as described in this topic:

import ray
import gymnasium as gym
from ray import tune
from ray.rllib.algorithms import ppo, sac
from ray.rllib.agents.ppo import PPOTrainer
from envs.hunter import Hunter, Hunter_config
from ray.tune import register_env

def env_creator(env_config):
    # RLlib passes the trainer's env_config dict here; this creator
    # ignores it and always uses the module-level Hunter_config
    return Hunter(config=Hunter_config)

register_env("hunter", env_creator)

trainer = PPOTrainer(env="hunter")

Are there any problems in my code?

Hi @jiangzhangze,

What this error is telling you is that an environment can only be used with RLlib if it inherits from one of the supported classes listed in the error message.

So you either want to inherit from gym.Env (the easiest way, but not always possible) or from a class provided by the RLlib module itself, e.g. ExternalEnv, in case your environment pulls actions in and pushes observations out, as some external simulators do.

See here for some more information about the environments in RLlib. See here for some examples.

Hi @Lars_Simon_Zehnder,
In fact, I have inherited from gym.Env. Here's part of my code:

import gymnasium as gym
from fluid_mechanics.simulation import simulation
import numpy as np
import gmsh
from fluid_mechanics.area import *
import random
Hunter_config = {
    "l": 0.01,
    "c_x": 0.2,
    "c_y": 0.2,
    "o_x": 0.3,
    "o_y": 0.2,
    "r": 0.05,
    "r2": 0,
    "x_min": 0.4,
    "y_min": 0.05,
    "length": 0.3,
    "num": 100,
    "action_space": gym.spaces.Box(low=np.array([0, 0]), high=np.array([2.2, 0.41]), dtype=np.float16),
    "observation_space": gym.spaces.Box(low=-1, high=1, shape=([100 * 2 * 7 * 1600, ]), dtype=np.float16)

class Hunter(gym.Env):
    def __init__(self, config):
        super(Hunter, self).__init__()
        # mesh generation parameters
        disable_env_checking = True  # note: a local variable here has no effect
        self.l = config.get("l", 0.01)
        self.c_x = config.get("c_x", 0.2)
        self.c_y = config.get("c_y", 0.2)
        self.o_x = config.get("o_x", 0.3)
        self.o_y = config.get("o_y", 0.2)
    def step(self, action):
        ...  # implementation omitted

    def reset(self):
        ...  # implementation omitted

    def render(self):
        ...  # implementation omitted

Hi @jiangzhangze,

This was a bug. It should be fixed in ray 2.3 and nightly.
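For older Ray versions, one workaround that is sometimes suggested (not guaranteed to cover every case of this bug) is to disable RLlib's environment pre-checking. Note that in the code above, `disable_env_checking` was set as a local variable inside `__init__`, where it has no effect; it is a trainer-level config key. A hedged config sketch:

```python
# "disable_env_checking" is a top-level RLlib config key; availability
# and behavior depend on the Ray version you are running.
config = {
    "env": "hunter",
    "disable_env_checking": True,
}
trainer = PPOTrainer(config=config)
```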

I am using Python 3.7.5. Does Ray 2.3 work with Python 3.7?
If not, is there any way to fix this bug without upgrading to Ray 2.3?