How severely does this issue affect your experience of using Ray?
- Low: It annoys or frustrates me for a moment.
After being overwhelmed by the fine details distinguishing ray.train() and ray.tune(), I am now digging deeper into the documentation and taking time for step-by-step examples to understand and learn.
Nevertheless, I think it would be very helpful if the official Tune FAQ provided a formula-based approach showing how the minimum number of Tune trials depends on the training / tuning / trial configs.
As a starting point, I would like to propose the following theoretically derived formula based on the documentation:
num_samples * size of the Cartesian product of the Tune search space (e.g. 3*3=9) * train_batch_size
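For the trial-count part of this formula, a minimal sketch of how the numbers might combine (an assumption on my part: train_batch_size is left out here, since it would affect per-trial compute rather than the number of trials; `estimated_trials` is a hypothetical helper, not a Ray API):

```python
from math import prod

def estimated_trials(num_samples: int, grid_sizes: list[int]) -> int:
    """Estimate total trials as num_samples times the Cartesian
    product of all grid-search axes in the search space."""
    return num_samples * prod(grid_sizes, start=1)

# e.g. two grid-search axes with 3 values each, num_samples=2:
# 2 * (3 * 3) = 18 trials
print(estimated_trials(2, [3, 3]))  # -> 18
```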
My currently running PPO trial without any Tune search space suggests that this formula is non-exhaustive, or rather a first idea.
Of course, users can intervene with stopping criteria. But what if no stopping criteria are defined? Does it run endlessly in that case?
Happy to hear your feedback and to share ideas on what this could look like!