How severely does this issue affect your experience of using Ray?
- Medium: It causes significant difficulty in completing my task, but I can work around it.
First off, sorry if my question is basic (I'm new to RLlib), but I can't seem to get anything printed to my console when using RLlib. I copied some code from the custom environment example (ray/custom_env.py at master in the ray-project/ray GitHub repo), and while training runs just fine, I only see the training summary output. None of the print statements I insert produce any console output, and even the pretty_print() call that came with the example code doesn't seem to do anything. I feel that if I understood what is happening with the print statements, I would understand what might be going on with some of my other problems.
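For context, the top of my script has essentially the same imports as the example (I may be misremembering the exact module paths, and dot_environment is my own environment module):

import argparse
from ray.rllib.algorithms import ppo
from ray.tune.logger import pretty_print
import dot_environment  # my custom environment module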
As a concrete example, none of the "printed" statements in the code below ever appear (the code is close to what's in the link above):
if args.no_tune:
    # Manual training loop using PPO with a fixed learning rate.
    if args.run != "PPO":
        raise ValueError("Only support --run PPO with --no-tune.")
    print("Running manual train loop without Ray Tune.")
    ppo_config = ppo.DEFAULT_CONFIG.copy()
    ppo_config.update(config)
    # Use a fixed learning rate instead of a grid search (which needs Tune).
    ppo_config["lr"] = 1e-3
    trainer = ppo.PPO(config=ppo_config, env=dot_environment.dot_environment)
    # Run the manual training loop and print results after each iteration.
    for _ in range(args.stop_iters):
        print("printed")
        result = trainer.train()
        print("printed")
        print(pretty_print(result))
        print("printed")
        # Stop training once the target train steps or reward are reached.
        if (
            result["timesteps_total"] >= args.stop_timesteps
            or result["episode_reward_mean"] >= args.stop_reward
        ):
            break
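In case it matters, I run the script roughly like this (the filename is just a placeholder for my actual script; the flags mirror the args attributes used above):

python my_rllib_script.py --no-tune --stop-iters 50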