Up until now, I'd thought that ray.init() was mandatory in a script that uses RLlib (i.e., to define a Trainer that learns policies for some agents, etc.).
Now, as far as I can see, it also works without calling ray.init().
If so, what are the benefits of including ray.init() in a script? Does it have something to do with distributed workflows, parallelism, scaling to large tasks…?
I'd be really glad about some "enlightenment".
Hi @klausk55. If you don't call ray.init(), RLlib implicitly calls it for you. The benefit of including ray.init() explicitly in your script is that you can pass arguments/options to customize Ray's behavior. In the majority of use cases, you shouldn't need to do this.
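For example, here is a minimal sketch of an explicit call (the parameter values are only illustrative, not recommendations):

```python
import ray

# Explicit initialization lets you customize Ray's behavior, e.g.:
ray.init(
    num_cpus=4,                 # cap how many CPUs Ray may use on this machine
    num_gpus=0,                 # don't reserve any GPUs
    include_dashboard=False,    # skip starting the Ray dashboard
    ignore_reinit_error=True,   # don't fail if Ray was already initialized
)

# Or connect the script to an already-running Ray cluster instead:
# ray.init(address="auto")
```

If you don't need any of these options, the implicit call that RLlib makes behaves the same way, so you can safely leave ray.init() out.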