How to get environment object from algorithm object (PPO)

How can we get the environment object from an algorithm object in general?

There are a couple of existing threads about this, but none with a robust answer.

In the first thread, the suggested solution is to use trainer.workers.local_worker().env, where trainer is the Algorithm built from a config, e.g. trainer = SomeAlgConfig().build(env_name).

This works for some algorithms, like DDPG (DDPGConfig), but not for others, like PPO (PPOConfig); see the reproduction script below.

Versions / Dependencies

Ray version 2.4.0
Python version 3.8

Reproduction script

Running the script below, I get the following output:
ddpg env <TimeLimit<OrderEnforcing<PassiveEnvChecker<PendulumEnv<Pendulum-v1>>>>>
ppo env None

from ray.rllib.algorithms.ddpg import DDPGConfig
from ray.rllib.algorithms.ppo import PPOConfig

env_name = "Pendulum-v1"

# ---> DDPG <---
ddpg_config = DDPGConfig()
ddpg_trainer = ddpg_config.build(env_name)
print("ddpg env", ddpg_trainer.workers.local_worker().env)
# --------------

# ---> PPO <---
ppo_config = PPOConfig()
ppo_trainer = ppo_config.build(env_name)
print("ppo env", ppo_trainer.workers.local_worker().env)
# -------------

Issue Severity

High: It blocks me from completing my task.

Okay, found a solution, I think: PPO requires the following line, while DDPG does not.

alg_config.rollouts(create_env_on_local_worker=True)

This seems to be because PPO's default config samples on remote rollout workers, so the local worker never creates its own env unless explicitly told to, whereas DDPG samples on the local worker by default.
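
For reference, here is a minimal sketch of the fixed PPO snippet (assuming Ray 2.4.0, where AlgorithmConfig.rollouts() accepts create_env_on_local_worker):

from ray.rllib.algorithms.ppo import PPOConfig

ppo_config = PPOConfig()
# Ask RLlib to also create an env instance on the local (driver) worker.
ppo_config.rollouts(create_env_on_local_worker=True)
ppo_trainer = ppo_config.build("Pendulum-v1")

# The local worker should now hold a real env instead of None.
print("ppo env", ppo_trainer.workers.local_worker().env)

With this change, the PPO print should show the wrapped PendulumEnv, matching the DDPG case.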