My use case for Tune/RLlib requires a custom input reader that reads from a database, and custom callback functions. What's the best way to add these custom Python objects to the trainer config?
Currently I'm using my own entrypoint script that parses a config file into a dict before passing it to tune.run_experiments. I was hoping to reuse the make_parser function to get a more versatile parser, and one that's more likely to stay up to date as new things are added to RLlib, but I can't figure out a way to use the provided API together with custom objects.
Right now, the only solution when you have custom classes that you need to specify in the Trainer's config is to go through a Python "setup" script, in which you would do:
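A minimal sketch of such a setup script (the class names here are hypothetical stand-ins; in real code the callbacks class would subclass RLlib's DefaultCallbacks). The key point is that plain Python classes and callables go directly into the config dict, which is exactly what a YAML file cannot express:

```python
class MyDatabaseInputReader:
    """Stand-in for a custom InputReader that pulls samples from a database."""

    def next(self):
        raise NotImplementedError


class MyCallbacks:
    """Stand-in for a custom callbacks class (in real code this would
    subclass ray.rllib.agents.callbacks.DefaultCallbacks)."""


config = {
    "env": "CartPole-v0",
    # A plain Python class -- not representable as a YAML value.
    "callbacks": MyCallbacks,
    # A callable that creates the reader per worker -- also not YAML-expressible.
    "input": lambda ioctx: MyDatabaseInputReader(),
}

# Then launch from Python via Tune (requires Ray at runtime):
# from ray import tune
# tune.run("PPO", config=config, stop={"training_iteration": 100})
```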
Right now, you would not be able to do the above via a YAML config file and then just run rllib train -f [the yaml file]. Only some config keys (env, exploration_config) allow custom classes to be specified via a fully qualified class string, e.g. "ray.rllib.utils.exploration.epsilon_greedy.EpsilonGreedy", but callbacks and input readers do not support this right now. We are looking into replacing our config dicts with Google's Gin (gin-config), which would unify this and make it easier to parse configs that contain Python classes/callables.
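For contrast, here is the pattern that already works for the supported keys: because the class is referenced as a fully qualified string, the value is YAML-expressible, and RLlib resolves it to the actual class at build time (callbacks and input readers have no equivalent mechanism):

```python
# exploration_config accepts a fully qualified class string, so this
# exact structure could also live in a YAML file.
config = {
    "env": "CartPole-v0",
    "exploration_config": {
        # Resolved by RLlib to the EpsilonGreedy class at trainer build time.
        "type": "ray.rllib.utils.exploration.epsilon_greedy.EpsilonGreedy",
    },
}
```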
Thanks for the info!
Do you think that, in the meantime, the arg parsing could be changed to return a dict (or something else that tune.run_experiments could take as an input parameter; let's call it run_params)? create_trial_from_spec returns a Trial, which I'm not able to run directly.
I'm hoping to:
1. have common Ray code parse the cmdline args & config file and return run_params,
2. inject the class instances that can't be expressed in the config file into run_params,
3. hand run_params to Tune for running.
I just don't want to duplicate code for common operations like specifying stop conditions and loading from checkpoints, while still being able to use custom classes.
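The workflow being asked for could be sketched as follows. Note that parse_run_params is a hypothetical stand-in for the shared Ray parsing code that does not exist today; only the shape of the data flow is illustrated:

```python
import copy


def parse_run_params(raw_spec):
    """Hypothetical stand-in for shared Ray parsing code: normalize a
    plain-dict experiment spec (as parsed from YAML/cmdline) into the
    dict that tune.run_experiments() accepts."""
    return copy.deepcopy(raw_spec)


class MyCallbacks:
    """Stand-in for a custom callbacks class that YAML cannot express."""


# Step 1: shared code parses cmdline args / config file into run_params.
raw = {
    "my_experiment": {
        "run": "PPO",
        "stop": {"training_iteration": 100},
        "config": {"env": "CartPole-v0"},
    }
}
run_params = parse_run_params(raw)

# Step 2: inject the class instances the config file cannot express.
run_params["my_experiment"]["config"]["callbacks"] = MyCallbacks

# Step 3: hand run_params to Tune (requires Ray at runtime):
# from ray import tune
# tune.run_experiments(run_params)
```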