When running PPO, episode reward cannot be calculated

Hi there. When I use Ray to run PPO (via `PPOConfig`) on my custom env, it shows the following:

(RolloutWorker pid=1954982) 2023-08-17 11:19:07,224     WARNING env.py:162 -- Your env doesn't have a .spec.max_episode_steps attribute. Your horizon will default to infinity, and your environment will not be reset.

This leads to `episode_reward_mean = nan`.

I have seen similar questions, but I don't know how to set `.spec.max_episode_steps`, and the new version 2.6.3 no longer has a `horizon` attribute, so I don't know how to handle this problem. I would appreciate any help.
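For what it's worth, `episode_reward_mean = nan` usually means no episode ever finishes, so there is nothing to average. One common fix (a sketch, not confirmed from your setup) is to make the env itself end episodes after a step budget by returning `truncated=True`, following the gymnasium 5-tuple `step()` API that recent RLlib versions expect. The class and the `max_episode_steps` argument below are illustrative names, not from your code; plain Python is used here so the idea stands alone:

```python
# Minimal sketch of a gymnasium-style env that truncates itself after a
# fixed number of steps, so the trainer sees completed episodes and can
# compute episode_reward_mean. `MyEnv` / `max_episode_steps` are
# hypothetical names for illustration.
class MyEnv:
    def __init__(self, max_episode_steps=200):
        self.max_episode_steps = max_episode_steps
        self._steps = 0

    def reset(self, *, seed=None, options=None):
        self._steps = 0
        obs, info = 0.0, {}
        return obs, info

    def step(self, action):
        self._steps += 1
        obs, reward = 0.0, 1.0
        terminated = False  # your task-specific success/failure condition
        # Signal end-of-episode once the step budget is exhausted; with a
        # truncated (or terminated) flag set, the episode closes and its
        # total reward can be logged.
        truncated = self._steps >= self.max_episode_steps
        return obs, reward, terminated, truncated, {}


env = MyEnv(max_episode_steps=5)
env.reset()
for _ in range(5):
    obs, reward, terminated, truncated, info = env.step(0)
print(truncated)  # True on the 5th step
```

If you use a real gymnasium env, the equivalent is registering it with `gymnasium.register(..., max_episode_steps=N)` (which also populates `env.spec.max_episode_steps`) or wrapping it in `gymnasium.wrappers.TimeLimit`.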