Unable to import custom gym environment having multiple parameters

I am having an issue importing a custom gym environment through RLlib. As mentioned in the documentation, there is a warning that gym env registration is not always compatible with Ray, so we can pass our environment class directly. But my custom env takes more than one argument, and when I pass the required arguments the way they are defined, I get an error that 2 positional arguments are missing; it is not picking up all the arguments I have given.
I have also tried other ways, like feeding an instance of my custom env, but it does not accept an instance of the class.
Defining the env class in a wrapper, i.e. an env_creator, also didn't work.

```
class IntradayTradingEnv(gym.Env):
    def __init__(self, df: pd.DataFrame, num_stock_shares: list[int],
                 tech_indicator_list: list[str], day=0, initial=True,
                 previous_state=[], model_name="", mode="", iteration="",
                 n_days=1, frame_stack=False, n_stack=5, reward_fun=1,
                 use_multiCore=False, cwd="./../Archive/Active_Directory",
                 reward_P=2, reward_L=2, closing_bias=-1,
                 closing_period=4) -> None:
```

```
import gymnasium as gym
import ray
from ray.rllib.algorithms import ppo
from application.PSW import config

ray.init(ignore_reinit_error=True)

config = ppo.PPOConfig().environment(env=env, env_config={"df": df,
                                                          "num_stock_shares": [0],
                                                          "tech_indicator_list": config.INDICATORS})
algo = config.build()

for _ in range(3):
print(algo.train())

algo.stop()
```

```
TypeError                                 Traceback (most recent call last)
Cell In[4], line 36
      8 # def env_creator(env_config):
      9 #     return IntradayTradingEnv(
     10 #         df=env_config["df"],
   (...)
     28 #     enable_rl_module_and_learner=True,
     29 #     enable_env_runner_and_connector_v2=True,
     33 config = ppo.PPOConfig().environment(env=env, env_config={"df": df,

    758     str(env.__class__.__base__) == "<class 'gym.core.Env'>"
    759     or str(env.__class__.__base__) == "<class 'gym.core.Wrapper'>"
    760 ):

TypeError: IntradayTradingEnv.__init__() missing 2 required positional arguments: 'num_stock_shares' and 'tech_indicator_list' was raised from the environment creator for rllib-single-agent-env-v0 with kwargs ({})
```
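From the traceback, it looks like RLlib constructs a class passed to .environment(env=...) with a single env_config dict (hence the kwargs ({}) in the error) rather than spreading it into keyword arguments. A minimal, untested sketch of a thin wrapper that unpacks that dict (the wrapper name is mine, not from the docs):

```
# Sketch only: RLlib calls the env class with one env_config dict, so this
# wrapper unpacks the keys used in this thread and leaves every other
# constructor parameter at its default.
class IntradayTradingEnvFromConfig(IntradayTradingEnv):
    def __init__(self, env_config: dict):
        super().__init__(
            df=env_config["df"],
            num_stock_shares=env_config["num_stock_shares"],
            tech_indicator_list=env_config["tech_indicator_list"],
        )
```

With that, .environment(env=IntradayTradingEnvFromConfig, env_config=...) should at least get past the missing-argument error.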

Have you tried instantiating an environment outside of rllib first?

env = env_creator(env_config)

Sometimes it can be easier to debug little pieces independent of the entire machinery of rllib.

I have tried that also, but it shows me the error "You can specify a custom env as either a class (e.g., YourEnvCls) or a registered env id (e.g., 'your_env')." But I don't want to register my env, since the documentation clearly mentions that you can also pass the env class directly. I got stuck because my env has multiple parameters to pass.

```
import gymnasium as gym
import ray
from ray.rllib.algorithms import ppo
from application.PSW import config

ray.init(ignore_reinit_error=True)

def env_creator(env_config):
    return IntradayTradingEnv(
        df=env_config["df"],
        num_stock_shares=env_config["num_stock_shares"],
        tech_indicator_list=env_config["tech_indicator_list"]
    )

algo_config = ppo.PPOConfig().environment(
    env=env_creator,  
    env_config={
        "df": df,
        "num_stock_shares": [0],
        "tech_indicator_list": config.INDICATORS
    }
)

algo = algo_config.build()

for _ in range(3):
    print(algo.train())

algo.stop()
```

That's what I did. If you have any recommendations, please let me know.

@Ramandeep_Singh,

I was suggesting something like this:

```
import gymnasium as gym
from application.PSW import config

def env_creator(env_config):
    return IntradayTradingEnv(
        df=env_config["df"],
        num_stock_shares=env_config["num_stock_shares"],
        tech_indicator_list=env_config["tech_indicator_list"]
    )

env_config={
        "df": df,
        "num_stock_shares": [0],
        "tech_indicator_list": config.INDICATORS
}

env = env_creator(env_config)

print(env.reset())
print(env.step(env.action_space.sample()))  # a random action from the env's action space
env.close()
```
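If reset() and step() run cleanly here, it can also be worth printing the spaces, since RLlib needs well-defined observation_space and action_space attributes on the env (a small addition to the same script):

```
print(env.observation_space)
print(env.action_space)
```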

Then how can I pass it to Ray? It is clearly mentioned that the env we pass should be the name of the environment rather than an instance or a function.

```
algo_config = ppo.PPOConfig().environment(
    env=env_creator,  
    env_config=env_config
)

algo = algo_config.build()
```
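Since the earlier error says RLlib only accepts a class or a registered env id, a creator function probably has to be registered first via ray.tune.registry.register_env and then passed by name. A minimal sketch, reusing the env_creator and env_config from above ("intraday_trading_env" is just a name I picked):

```
from ray.tune.registry import register_env

# Register the creator under a name; RLlib calls it with env_config.
register_env("intraday_trading_env", env_creator)

algo_config = ppo.PPOConfig().environment(
    env="intraday_trading_env",
    env_config=env_config,
)
algo = algo_config.build()
```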

@Ramandeep_Singh,

TypeError: IntradayTradingEnv.__init__() missing 2 required positional arguments: 'num_stock_shares' and 'tech_indicator_list' was raised from the environment creator for rllib-single-agent-env-v0 with kwargs ({})

That error means you have some issue with env_creator and/or env_config.

Once you figure out why env creation is failing and fix it in the example I shared, go back to how you were doing it originally and apply the same changes there.