Hi!
This is the first time I am using RLlib. I am trying to train a VSL solution in the SUMO simulator with my own environment (4 agents, discrete action space, DQN), but I get this error when I run the training:
ValueError: RolloutWorker has no input_reader object! Cannot call sample(). You can try setting create_env_on_driver to True.
This is my training script:
from myEnvironment import myEnvironment
import ray
from ray.tune.registry import register_env
from ray.rllib.algorithms.dqn import DQN

ray.init()
register_env("myEnv", lambda config: myEnvironment())
env = myEnvironment()

config = {
    "environment": "myEnv",
    "observation_space": env.observation_space,
    "action_space": env.action_space,
    "framework": "torch",
    "timesteps_per_iteration": 1000,
    "create_env_on_driver": 4,
}
# Create the RLlib agent
agent = DQN(config=config)

observation = env.reset()

for iteration in range(100):
    result = agent.train()

    # Print training progress
    print(f"Iteration {iteration}: {result}")

    # Save a checkpoint every 10 iterations
    if iteration % 10 == 0:
        checkpoint = agent.save()
        print(f"Checkpoint saved at iteration {iteration}: {checkpoint}")
observation_space and action_space are implemented like this in my environment:
occupancy_low = 0
occupancy_high = 1
speed_low = 0
speed_high = 37
speed_observation_space = spaces.Box(low=speed_low, high=speed_high, shape=(4,), dtype=np.float32)
occupancy_observation_space = spaces.Box(low=occupancy_low, high=occupancy_high, shape=(4,), dtype=np.float32)
bit_observation_space = spaces.Box(low=0, high=1, shape=(3,), dtype=np.float32)
observation_space = spaces.Tuple((speed_observation_space, occupancy_observation_space, bit_observation_space))
action_space = gym.spaces.Discrete(3)
class myEnvironment(MultiAgentEnv):
    def __init__(self):
        self.observation_space = spaces.Dict({
            "agent_0": observation_space,
            "agent_1": observation_space,
            "agent_2": observation_space,
            "agent_3": observation_space,
        })
        self.action_space = action_space
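To make the structure concrete, a single agent's observation is just a sample from the Tuple space above, e.g.:

# Illustration only: what one agent's observation looks like.
single_obs = observation_space.sample()
# -> (speeds: float32 array of shape (4,) in [0, 37],
#     occupancies: float32 array of shape (4,) in [0, 1],
#     bit flags: float32 array of shape (3,) in [0, 1])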
Is there anything I am doing wrong? Thank you in advance!