Arcade Learning Environment not Recognized by Ray

How severe does this issue affect your experience of using Ray?

  • High: It blocks me from completing my task.

For my task I want to use the Arcade Learning Environment (ALE) to run some experiments. Unfortunately, the simple example below does not work, and Ray throws the following error:

ray.rllib.utils.error.EnvError: The env string you provided ('ALE/Pong-v5') is:
a) Not a supported/installed environment.
b) Not a tune-registered environment creator.
c) Not a valid env class string.

Try one of the following:
a) For Atari support: `pip install gym[atari] autorom[accept-rom-license]`.
   For VizDoom support: Install VizDoom
   (https://github.com/mwydmuch/ViZDoom/blob/master/doc/Building.md) and
   `pip install vizdoomgym`.
   For PyBullet support: `pip install pybullet`.
b) To register your custom env, do `from ray import tune;
   tune.register('[name]', lambda cfg: [return env obj from here using cfg])`.
   Then in your config, do `config['env'] = [name]`.
c) Make sure you provide a fully qualified classpath, e.g.:
   `ray.rllib.examples.env.repeat_after_me_env.RepeatAfterMeEnv`
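
Following option (a), I installed the suggested Atari packages (package names taken verbatim from the error message above; the quotes are only shell hygiene):

pip install "gym[atari]" "autorom[accept-rom-license]"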

This did not resolve the error. Here is a minimal example that reproduces the problem (my attempt at option (b) follows the example):

from ray.rllib.algorithms.appo import APPOConfig

config = (  # Configure the algorithm.
    APPOConfig()
    # .environment("Taxi-v3")  # Works.
    .environment("ALE/Pong-v5")  # Doen't work.
    .rollouts(num_rollout_workers=2)
    .framework("torch")
    .training(model={"fcnet_hiddens": [64, 64]})
    .evaluation(evaluation_num_workers=1)
)

algo = config.build()

for _ in range(5):
    print(algo.train())

algo.evaluate()
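
For completeness, I also attempted option (b). As far as I can tell, tune exposes register_env rather than the tune.register shown in the error text, so the sketch below is my best guess at what was meant (the name "pong_v5" and the gymnasium import are assumptions based on my Ray version):

import gymnasium as gym  # older Ray versions use `gym` instead
from ray import tune

# Sanity check: try creating the env directly, outside of RLlib.
env = gym.make("ALE/Pong-v5")
print(env.action_space)

# Option (b): register the env under a custom name, then reference it
# with .environment("pong_v5") in the APPOConfig above.
tune.register_env("pong_v5", lambda env_config: gym.make("ALE/Pong-v5"))

Even if the manual registration were to work, I would still like to understand why the plain "ALE/Pong-v5" string is not recognized.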

Could you please provide guidance on how to fix the problem so that Ray can detect the environment? Thank you very much!