Is it possible to train on several different envs?

How severely does this issue affect your experience of using Ray?

  • High: It blocks me from completing my task.

Hi! Does anybody know if it's possible to train an RLlib algorithm with several different environments?
Is something like this possible?

    envs = get_envs()  # my own helper that returns several environments
    algo = (DQNConfig()...).build()

    for e in envs:
        # somehow set e as the algo's env here
        algo.train()
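
To make the question more concrete, below is a rough sketch of the only workaround I can think of: building a fresh algorithm for every environment and copying the policy weights over between them. The env ids and the number of iterations are just placeholders, and the weight-copying idea is my own guess, not something I found in the docs.

    from ray.rllib.algorithms.dqn import DQNConfig

    # Placeholder env ids; both have the same obs/action spaces,
    # so the policy weights can be carried over between them.
    env_names = ["CartPole-v0", "CartPole-v1"]

    weights = None
    for env_name in env_names:
        config = DQNConfig().environment(env=env_name)
        algo = config.build()
        if weights is not None:
            # Continue from the policy trained on the previous env.
            algo.get_policy().set_weights(weights)
        for _ in range(5):  # placeholder number of iterations per env
            algo.train()
        weights = algo.get_policy().get_weights()
        algo.stop()

Is something like this the intended approach, or is there a supported way to switch the environment on an already built algo?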