Accessing the DQN memory buffer

Hi,
When using a DQN agent (or any other relevant algorithm, for that matter) - is there a way that I can manipulate the agent’s memory buffer during training?

*By manipulation I mean adding transitions to the buffer or removing them from it.

Hi @Ofir_Abu,

this manipulation is certainly not trivial. As you can see in the source code, the ReplayBuffer holds in _storage a list of SampleBatches that contain the experiences collected from the environment. This means you either need to add SampleBatches (using add()) or remove them directly from this buffer.
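For a concrete picture, here is a minimal sketch of both operations. It assumes a recent RLlib release where ReplayBuffer lives under ray.rllib.utils.replay_buffers (the import path, and the exact SampleBatch field names such as dones vs. terminateds, differ across versions), and it builds a standalone buffer rather than reaching into a running Trainer:

```python
from ray.rllib.policy.sample_batch import SampleBatch
from ray.rllib.utils.replay_buffers.replay_buffer import ReplayBuffer

# Standalone buffer for illustration; in a real training run you would
# operate on the agent's own buffer instead.
buffer = ReplayBuffer(capacity=1000)

# Adding: wrap a hand-made transition in a SampleBatch and call add().
transition = SampleBatch({
    SampleBatch.OBS: [[0.0, 0.0]],
    SampleBatch.ACTIONS: [0],
    SampleBatch.REWARDS: [1.0],
    SampleBatch.NEXT_OBS: [[0.1, 0.0]],
    SampleBatch.DONES: [False],
})
buffer.add(transition)

# Removing: there is no public remove() method, so you have to filter
# the private _storage list directly. Be aware this can leave internal
# bookkeeping (e.g. timestep counters) out of sync with the contents.
buffer._storage = [
    b for b in buffer._storage
    if float(b[SampleBatch.REWARDS][0]) > 0.0
]

print(len(buffer))  # number of stored items after filtering
```

Since _storage is private, this is fragile across RLlib versions; check the buffer class in your installed version before relying on it.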

Hi, I am interested in learning how to customize policies/models by reading DQN’s code (because the official RLlib documentation is really hard to follow). However, I feel pretty confused when reading it.

Do you have any suggestions on where I should start to read?
Should I have a strong TensorFlow or PyTorch background?

@Roller, could you start a new topic?

Yes, sorry. I have started a new topic; here is the link, if you are interested.