I am working with RL code that implements algorithms via Tensorforce. Training works by adding experiences to a buffer through a method on the agent; whenever the buffer reaches the configured step size, training starts.
A minimal example would be:

```python
for i in range(timesteps):
    action = agent.predict(state)
    next_state, reward, done, _ = env.step(action)
    agent.add_to_buffer(state, action, reward, next_state, done)
    state = next_state
```
Adding the experiences by hand is necessary in this case because of the particular structure of the environment. I am aware that RLlib is a state-of-the-art library for RL, and I would like to switch the agents from the Tensorforce implementation to RLlib. My question is: is there any way to write equivalent code using RLlib? That is, to pass the experiences to the agent externally, instead of having it collect them and train under the hood.
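To make the interface I am after concrete, here is a minimal pure-Python sketch of the push-style flow described above (everything here — `ExternalBufferAgent`, `add_to_buffer`, `step_size` — is hypothetical scaffolding, not RLlib API): experiences are pushed in from outside, and a training step fires once the buffer reaches the step size.

```python
import random


class ExternalBufferAgent:
    """Hypothetical agent: experiences are fed in externally, and a
    training step runs once the buffer holds `step_size` items."""

    def __init__(self, step_size):
        self.step_size = step_size
        self.buffer = []
        self.train_calls = 0

    def predict(self, state):
        # Placeholder policy: random action from a small discrete space.
        return random.randrange(2)

    def add_to_buffer(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))
        if len(self.buffer) >= self.step_size:
            self._train()
            self.buffer.clear()

    def _train(self):
        # A real implementation would update the policy from self.buffer.
        self.train_calls += 1


agent = ExternalBufferAgent(step_size=4)
for t in range(10):
    agent.add_to_buffer(state=t, action=0, reward=1.0, next_state=t + 1, done=False)
print(agent.train_calls)  # → 2 (buffer of 4 fills twice in 10 steps)
```

The question is whether RLlib exposes hooks for this same push-style flow, where the caller, not the library, drives experience collection.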