Conversion to Ray 2.0

How severely does this issue affect your experience of using Ray?

  • Low: It annoys or frustrates me for a moment.

QUESTION 1:

My question is about how to run RLlib now. I want to train my MARL agents with PPO, and I have seen two ways in the examples:

What is the difference between:
tuner = tune.Tuner(
    args.run,
    param_space=config,
    run_config=air.RunConfig(stop=stop),
)
results = tuner.fit()

and

tune.run(**exp_dict)

QUESTION 2:
Where should I set these now in Ray 2.0? (Before, I believe, they were just parameters of the config_dict.)

framework='tf2'
eager_tracing=True
'log_level': 'INFO'

Thanks!

Hi @Username1 ,

Question 1: From RLlib's perspective, there is no difference. We (the Ray libraries) are consolidating top-level APIs and, in the long run, would like to match the common ".fit()" syntax you see in many ML tools.

Question 2: They have not changed their place in the config_dict. But the way you construct a config_dict now is with the config objects. This is still being updated in our examples and many other places. Have a look at the current "RLlib in 60 seconds" example.


Just make sure you are looking at the "master" version (in the bottom-right corner) and not "latest".

Cheers
