Getting RLlib running with custom environments on Jupyter

How severely does this issue affect your experience of using Ray?

  • Low: It annoys or frustrates me for a moment.

Hey all, this might be simple, but I'm having problems getting RLlib running with custom Gym environments on Jupyter. I think the issue is that I don't understand how to build the config for custom envs, and I'm not sure where the documentation for that is. See the screenshots below for what I'm trying to do (the env is just used for testing) and the error I'm running into.

(RLlib on Jupyter - Album on Imgur)

You could check out this tutorial for first steps with a custom env: https://github.com/sven1977/rllib_tutorials/blob/main/ray_summit_2021/tutorial_notebook.ipynb. It uses a multi-agent environment, but apart from that all the steps should be the same for you: 1. build the config, 2. instantiate the trainer, 3. train.
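For reference, here is a minimal sketch of those three steps with a single-agent custom env, assuming a Ray 1.x install with the old `ray.rllib.agents` API (matching the era of that tutorial) and the pre-0.26 Gym interface. `MyCustomEnv` and its `size` option are made up for illustration; the key point is that you can pass the env class directly under the `"env"` key and that whatever you put in `"env_config"` is forwarded to your env's constructor:

```python
import gym
import numpy as np
import ray
from ray.rllib.agents import ppo


class MyCustomEnv(gym.Env):
    """Toy guessing env: reward 1 if the action matches the observed target."""

    def __init__(self, env_config):
        # env_config is the dict you set under config["env_config"]
        self.size = env_config.get("size", 5)  # "size" is a made-up option
        self.observation_space = gym.spaces.Discrete(self.size)
        self.action_space = gym.spaces.Discrete(self.size)

    def reset(self):
        self.target = np.random.randint(self.size)
        return self.target

    def step(self, action):
        reward = 1.0 if action == self.target else 0.0
        # One-step episodes: done is always True
        return self.target, reward, True, {}


# ignore_reinit_error avoids errors when re-running a notebook cell
ray.init(ignore_reinit_error=True)

# 1. config: pass the env class directly (no string registration needed)
config = {
    "env": MyCustomEnv,
    "env_config": {"size": 5},  # forwarded to MyCustomEnv.__init__
    "num_workers": 1,
    "framework": "torch",
}

# 2. instantiate trainer
trainer = ppo.PPOTrainer(config=config)

# 3. train
for _ in range(3):
    result = trainer.train()
    print(result["episode_reward_mean"])
```

If you'd rather refer to the env by a string name (e.g. to use it with `tune.run`), you can register it first with `ray.tune.registry.register_env("my_env", lambda cfg: MyCustomEnv(cfg))` and then set `"env": "my_env"` in the config.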