Basic RLlib session throws SystemExit error

Hi all, I’m getting back into reinforcement learning, so I thought I’d try some basic algorithms again. However, I’m hitting an error that I’ve never seen before, and no one else seems to have dealt with it either. The closest thing I could find is a similar SystemExit that Ray actors throw on purpose, but then it doesn’t make sense that an error thrown by design isn’t handled properly. The error seems to occur after training is complete, and the training results themselves look fine.
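
For reference, the “by design” SystemExit I mean is the one Ray actors use to shut themselves down, e.g. via ray.actor.exit_actor() (at least, that’s my understanding from the similar reports I found; this snippet is just my illustration, not from those reports):

import ray
from ray.actor import exit_actor

ray.init()

@ray.remote
class Worker:
    def shutdown(self):
        # exit_actor() intentionally terminates the actor process;
        # if I recall correctly, older Ray versions surface this in
        # the logs as a SystemExit traceback.
        exit_actor()

w = Worker.remote()
w.shutdown.remote()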

Here’s my code and a screenshot of the error:

from ray import tune
from ray.rllib.agents.ppo import PPOTrainer

analysis = tune.run(
    PPOTrainer,
    name='trainingwheels',
    config={'env': 'CartPole-v0'},
    stop={
        'episode_reward_mean': 195,
        # fallback in case the mean episode reward never reaches 195
        'training_iteration': 15
    },
    local_dir='ray_results',
    verbose=1
)

Does anyone know how I can fix this?

Hey @RickDW, I’m with you, this is a confusing new error that surfaced a few weeks ago, but it has nothing to do with RLlib (it’s a Ray Core/Tune issue).
Could you post this same question with the screenshot on the Ray Core and Tune forums?
It shouldn’t cause the program to actually crash, though; it just looks ugly :confused: . Sorry, I’m not sure what’s causing this either.

Hey, thanks for the reply. I’ve posted it with the Tune tag as well. I’d dive into the codebase myself if my schedule weren’t so swamped, but it’s not a very pressing issue anyway, since it doesn’t cause notebooks to crash.

Hi, I’m experiencing the same error. The interesting thing is that it happens with tune.run() but not with trainer.train() for the same config.
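
For comparison, here’s a minimal sketch of the trainer.train() path that works fine for me; it reproduces the stop conditions from the tune.run() call above (the 195 reward target and the 15-iteration fallback) as a manual loop:

import ray
from ray.rllib.agents.ppo import PPOTrainer

ray.init()
trainer = PPOTrainer(config={'env': 'CartPole-v0'})

for i in range(15):  # same fallback as 'training_iteration': 15
    result = trainer.train()
    # 'episode_reward_mean' mirrors the stop criterion in the tune.run() call
    if result['episode_reward_mean'] >= 195:
        break

ray.shutdown()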