HalfCheetah is not working with MBMPO

To run the HalfCheetah env with the MBMPO algorithm, a wrapper is needed. The wrapper provided in ray.rllib.examples.env.mbmpo_env.HalfCheetahWrapper is for HalfCheetah-v2 and gives errors. I made my own wrapper to work around this issue (source file), but now I don't get any result files for TensorBoard. I want to visualise graphs for this algorithm on this env.
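For reference, the key thing MBMPO needs from the env wrapper is a batched `reward(obs, action, obs_next)` method that its model-based rollouts can call. Below is a minimal sketch of that method, mirroring the reward logic of the RLlib HalfCheetah example (forward velocity minus control cost). The `MBMPORewardMixin` name is hypothetical, and the use of observation index 8 as the forward velocity is an assumption tied to the HalfCheetah observation layout; in a real wrapper you would mix this into your HalfCheetah env class.

```python
import numpy as np


class MBMPORewardMixin:
    """Hypothetical sketch of the batched reward() method MBMPO expects.

    Mirrors the RLlib HalfCheetah example: reward = forward velocity
    minus a control cost. Index 8 as the forward-velocity component is
    an assumption about the HalfCheetah observation layout.
    """

    def reward(self, obs, action, obs_next):
        action = np.asarray(action, dtype=np.float64)
        obs_next = np.asarray(obs_next, dtype=np.float64)
        if obs_next.ndim == 2:
            # Batched call from the learned dynamics-model rollouts.
            forward_vel = obs_next[:, 8]
            ctrl_cost = 0.1 * np.sum(np.square(action), axis=1)
        else:
            # Single-transition call.
            forward_vel = obs_next[8]
            ctrl_cost = 0.1 * np.square(action).sum()
        # Clip to a sane range, as the RLlib example wrapper does.
        return np.clip(forward_vel - ctrl_cost, -1000.0, 1000.0)
```

If the error you hit comes from the gym API change instead (the `seed`/`return_info` warning below points that way), the fix would live in `reset()`/`step()`, not here.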

The first few warnings in the log are as follows:

(MBMPO pid=1367166) 2023-04-22 15:08:12,739 WARNING algorithm_config.py:596 -- Cannot create MBMPOConfig from given `config_dict`! Property __stdout_file__ not supported.
(MBMPO pid=1367166) 2023-04-22 15:08:12,871 INFO algorithm.py:506 -- Current log_level is WARN. For more information, set 'log_level': 'INFO' / 'DEBUG' or use the -v and -vv flags.
(RolloutWorker pid=1367231) 2023-04-22 15:08:17,307 WARNING env.py:156 -- Your env doesn't have a .spec.max_episode_steps attribute. Your horizon will default to infinity, and your environment will not be reset.
(RolloutWorker pid=1367231) 2023-04-22 15:08:17,308 WARNING env.py:166 -- Your env reset() method appears to take 'seed' or 'return_info' arguments. Note that these are not yet supported in RLlib. Seeding will take place using 'env.seed()' and the info dict will not be returned from reset.

What else happens?
You run a training but TensorBoard stays empty?
Where are the files being written? Have you checked them?
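By default, Ray Tune writes results (including the TensorBoard event files) under `~/ray_results/<experiment>/<trial>/`. A quick sketch to check whether any event files were actually produced (assuming the default results directory was not changed):

```python
# Sketch: look for TensorBoard event files under Ray's default
# results directory, ~/ray_results (assumption: local_dir not changed).
from pathlib import Path

results_root = Path.home() / "ray_results"
event_files = sorted(results_root.glob("**/events.out.tfevents.*"))

for f in event_files[:10]:
    print(f)
if not event_files:
    print("No TensorBoard event files found under", results_root)
```

If files show up here, pointing TensorBoard at `~/ray_results` should work; if not, the run is likely failing before any results are reported.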