Multi_gpu_impl.py: Divided "X" rollout sequences, each of length _, among _ devices

Hello, I am very perplexed by this info log:
(pid=141) 2021-07-08 15:31:05,240 INFO multi_gpu_impl.py:188 -- Divided 40800 rollout sequences, each of length 1, among 1 devices.

How is the number of these “rollout sequences” calculated? I went through the source code, but there is too much abstraction for me to work it out on my own.

How would I calculate this number from the relevant config:

train_batch_size: 1024
rollout_fragment_length: 256
sgd_minibatch_size: 128
num_sgd_iter: 30
max_seq_len: 20 (I thought this was only used for LSTMs, but it was mentioned in the source code?)

Am I missing something that could be used to calculate this?
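For what it's worth, here is the naive arithmetic I tried in order to reproduce the log numbers. The split rules below are just my guesses from skimming multi_gpu_impl.py, so the variable names and logic are my assumptions, not RLlib's actual code:

# My naive attempt at reproducing "Divided 40800 rollout sequences,
# each of length 1, among 1 devices." -- guesswork, not RLlib code.

train_batch_size = 1024
max_seq_len = 20
num_devices = 1

# Guess 1: no RNN, so every timestep counts as its own "sequence" of length 1.
num_sequences_no_rnn = train_batch_size  # -> 1024, nowhere near 40800
print(f"Divided {num_sequences_no_rnn} rollout sequences, "
      f"each of length 1, among {num_devices} devices.")

# Guess 2: with an RNN, the batch is chopped into chunks of max_seq_len.
num_sequences_rnn = train_batch_size // max_seq_len  # -> 51, also not 40800
print(f"Divided {num_sequences_rnn} rollout sequences, "
      f"each of length {max_seq_len}, among {num_devices} devices.")

Neither guess comes anywhere close to 40800, which is why I suspect the answer involves something not in the config list above.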