Adding a custom ClearML logger callback through the config.yaml file

What is the best way to add a custom logger to the config.yaml file so that it is consumed by the `rllib train file config.yaml` command?

I am using a custom logger called ClearMLLogger.

So, assume I start with an example from the tuned_examples folder, such as cartpole-ppo.yaml, and I want to use my ClearMLLogger class from the config file.

The loggers listed in the docs already seem to have integrations:
https://docs.ray.io/en/latest/tune/api/logging.html
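
For context, attaching one of the built-in loggers from Python seems clear enough, e.g. MLflow (a minimal sketch; the exact import path appears to vary between Ray versions):

```python
from ray import tune
# Older Ray versions; newer ones move this to ray.air.integrations.mlflow
from ray.tune.integration.mlflow import MLflowLoggerCallback

tune.run(
    "PPO",
    config={"env": "CartPole-v1", "framework": "torch"},
    stop={"timesteps_total": 100000},
    callbacks=[MLflowLoggerCallback(experiment_name="cartpole-ppo")],
)
```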

Can you provide the following information?

  1. An example of how the callbacks section of my cartpole-ppo.yaml should look to utilize the ClearMLLogger class I defined.
  2. Another example for the currently supported callbacks, e.g. MLflow, also in cartpole-ppo.yaml. (Even with the available loggers, I don't get an idea of how to set them up from the config.yaml file.)

Here is my clearml_logger.py:

```python
from typing import Dict, List

from clearml import Task
from ray.tune.logger import LoggerCallback


class ClearMLLogger(LoggerCallback):
    """Custom ClearML logger interface."""

    def __init__(self, project_name: str, task_name: str, auto_connect_frameworks: Dict):
        self._trial_tasks = {}
        self._project_name = project_name
        self._task_name = task_name
        self._auto_connect_frameworks = auto_connect_frameworks

    def log_trial_start(self, trial: "Trial"):
        # Open one ClearML task per trial.
        task = Task.init(
            project_name=self._project_name,
            task_name=f"{self._task_name}_{trial.trial_id}",
            auto_connect_frameworks=self._auto_connect_frameworks,
        )
        self._trial_tasks[trial] = task

    def log_trial_result(self, iteration: int, trial: "Trial", result: Dict):
        # Report every numeric metric in the result dict as a scalar.
        if trial in self._trial_tasks:
            logger = self._trial_tasks[trial].get_logger()
            for key, value in result.items():
                if isinstance(value, (int, float)):
                    logger.report_scalar(title=key, series="result", value=value, iteration=iteration)

    def on_trial_complete(self, iteration: int, trials: List["Trial"], trial: "Trial", **info):
        # Close the ClearML task once its trial finishes.
        if trial in self._trial_tasks:
            self._trial_tasks[trial].close()
            del self._trial_tasks[trial]
```
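
For reference, this is how I would attach it directly from Python (a sketch of the equivalent tune.run call; what I am after is the YAML route):

```python
from ray import tune

from clearml_logger import ClearMLLogger

tune.run(
    "PPO",
    config={"env": "CartPole-v1", "framework": "torch"},
    callbacks=[
        ClearMLLogger(
            project_name="ClearML Project Name",
            task_name="ClearML Task name",
            auto_connect_frameworks={"tensorboard": True, "tfdefines": False},
        )
    ],
)
```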

Let's start with cartpole-ppo.yaml (ray/rllib/tuned_examples/ppo/cartpole-ppo.yaml at master · ray-project/ray · GitHub), which works for me.

My updated version to use callbacks:

```yaml
cartpole-ppo:
  env: CartPole-v1
  run: PPO
  stop:
    sampler_results/episode_reward_mean: 150
    timesteps_total: 100000
  config:
    # Works for both torch and tf2.
    framework: torch
    gamma: 0.99
    lr: 0.0003
    num_workers: 1
    num_sgd_iter: 6
    vf_loss_coeff: 0.01
    model:
      fcnet_hiddens: [32]
      fcnet_activation: linear
      vf_share_layers: true
    callbacks: clearml_logger.ClearMLLogger
    project_name: "ClearML Project Name"
    task_name: "ClearML Task name"
    auto_connect_frameworks:
      tensorboard: true
      tfdefines: false
```

Error summary:

```
ValueError: Could not deserialize the given classpath module=clearml_logger.ClearMLLogger
into a valid python class! Make sure you have all necessary pip packages installed and all
custom modules are in your PYTHONPATH env variable.
```
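
To sanity-check the classpath, here is my own approximation of what the deserializer presumably does (not RLlib's actual code); if this raises an ImportError, clearml_logger.py is not reachable on PYTHONPATH:

```python
# My own approximation of the classpath lookup (not RLlib's actual code).
import importlib

module_name, class_name = "clearml_logger.ClearMLLogger".rsplit(".", 1)
cls = getattr(importlib.import_module(module_name), class_name)
print(cls)  # <class 'clearml_logger.ClearMLLogger'> if resolvable
```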

Also, note that `rllib train --help` shows that `--config` should be a JSON file, but that does not really work either.