You could also do something like this. I don't know if it counts as "a better way", but it would work. I don't think updating `after_init` on the Trainer will step on any other callbacks, because I believe those are defined on the policies rather than on the trainer, but @sven1977 would know better.
```python
from ray import tune
from ray.tune.registry import register_trainable
from ray.rllib.agents.ppo import PPOTrainer

# after_init runs once, right after the Trainer instance is constructed.
ModelPrintPPOTrainer = PPOTrainer.with_updates(
    after_init=lambda trainer: trainer.get_policy().model.base_model.summary()
)
register_trainable("ModelPrintPPOTrainer", ModelPrintPPOTrainer)
tune.run("ModelPrintPPOTrainer", ...)
```
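If you just want to eyeball the model once, you don't strictly need Tune at all. Here's a minimal sketch of the same check run directly, assuming the TF framework (`base_model` is the underlying Keras model of the default TF ModelV2) and using `CartPole-v0` purely as an illustrative environment:

```python
import ray
from ray.rllib.agents.ppo import PPOTrainer

ray.init()

# Build the trainer directly instead of going through tune.run.
trainer = PPOTrainer(env="CartPole-v0", config={"framework": "tf"})

# Prints the Keras layer table for the default policy's model.
trainer.get_policy().model.base_model.summary()
```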