Parallel workers reporting to TensorBoard

I have implemented a 2-DQN policy for multi-objective RL using Ray Core, and I would like to log rewards, the Pareto front, custom metrics, loss functions, and similar information to TensorBoard. How does RLlib combine the data from parallel workers when logging to TensorBoard? What best practices should I follow to implement TensorBoard logging for my 2-DQN policy? Is there any documentation or tutorial I can refer to specifically for implementing RLlib-style TensorBoard logging in custom code written with Ray Core?
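
For context, here is a minimal sketch of the kind of driver-side aggregation I have in mind (the worker class, metric names, and log directory are placeholders for my actual code, and I'm writing with torch's `SummaryWriter` rather than any RLlib logger):

```python
import ray
import numpy as np
from torch.utils.tensorboard import SummaryWriter

ray.init()

@ray.remote
class RolloutWorker:
    """Placeholder for one of my parallel 2-DQN rollout workers."""

    def sample(self) -> dict:
        # In my real code this runs episodes and returns per-iteration
        # metrics for both objectives; dummy values here.
        return {
            "episode_reward_obj1": float(np.random.rand()),
            "episode_reward_obj2": float(np.random.rand()),
            "loss_dqn1": float(np.random.rand()),
            "loss_dqn2": float(np.random.rand()),
        }

workers = [RolloutWorker.remote() for _ in range(4)]
writer = SummaryWriter(log_dir="./tb_2dqn")

for iteration in range(10):
    # Gather metrics from all parallel workers on the driver.
    results = ray.get([w.sample.remote() for w in workers])
    # Aggregate across workers; mean/min/max here, similar to the
    # episode_reward_mean/min/max scalars RLlib reports.
    for key in results[0]:
        values = [r[key] for r in results]
        writer.add_scalar(f"{key}_mean", np.mean(values), iteration)
        writer.add_scalar(f"{key}_min", np.min(values), iteration)
        writer.add_scalar(f"{key}_max", np.max(values), iteration)

writer.close()
```

Is this roughly how RLlib does it internally (workers return metric dicts, the driver reduces and logs them), or does it aggregate differently?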