I am using Tune with the Trainable Class API to parallelise computations that belong to one and the same experiment. A fixed number of runs is executed. By default, the Tune logger logs each run as a separate folder. This is fine, but I would like all the runs to point to one single MLflow experiment and run. Each run produces one file, and I would like to put all of these files into the artifacts URI (folder) of a single experiment and run: if I execute 100 runs, 100 files should end up in the same artifacts folder. Note that all workers have access to the same local file storage.
I am not sure how to use mlflow_mixin to achieve this. If I use the function (trainable) API with the @mlflow_mixin decorator, the MLflowTrainableMixin class does seem to get instantiated, but how does one use mlflow_mixin together with the Trainable Class API? I can add the decorator to any function, but when I print the MLflow tracking URI it points to the path of the current Tune trial. It seems the mlflow entry that is fed into the configuration dictionary is not used.
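Roughly what I tried, simplified (`MyTrainable`, the tracking URI and the experiment name are placeholders for my actual code):

```python
from ray import tune
from ray.tune.integration.mlflow import mlflow_mixin


class MyTrainable(tune.Trainable):
    # Decorating a method of a class Trainable; this is the part I am
    # unsure about -- the decorator seems to have no effect here.
    @mlflow_mixin
    def step(self):
        import mlflow

        # Prints a path under the current Tune trial directory,
        # not the tracking_uri passed in the config below.
        print(mlflow.get_tracking_uri())
        return {"dummy_metric": 1}


tune.run(
    MyTrainable,
    config={
        "mlflow": {
            "tracking_uri": "file:///shared/mlruns",
            "experiment_name": "my_experiment",
        }
    },
    stop={"training_iteration": 1},
)
```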
I have not found any example of using MLflow together with the Trainable Class API.
One option seems to be to use MLflow directly, or the MLflowLoggerUtil class, to help out.
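Here is a minimal, untested sketch of the direct approach: create one experiment and run on the driver, then have every trial log its file into that run's artifacts folder via MlflowClient. `MyTrainable`, the tracking URI, the experiment name and the file contents are placeholders for my actual setup:

```python
import mlflow
from mlflow.tracking import MlflowClient
from ray import tune

TRACKING_URI = "file:///shared/mlruns"  # assumption: visible to all workers


class MyTrainable(tune.Trainable):
    def setup(self, config):
        self.client = MlflowClient(tracking_uri=TRACKING_URI)
        self.mlflow_run_id = config["mlflow_run_id"]  # shared by all trials

    def step(self):
        # Each trial produces one file (placeholder for my real output).
        out_file = f"{self.logdir}/result_{self.trial_id}.txt"
        with open(out_file, "w") as f:
            f.write("one file per Tune trial")
        # Every trial copies its file into the artifacts folder
        # of the one shared MLflow run.
        self.client.log_artifact(self.mlflow_run_id, out_file)
        return {"dummy_metric": 1}


if __name__ == "__main__":
    # Create the single experiment and run once, on the driver
    # (assumes the experiment does not exist yet).
    mlflow.set_tracking_uri(TRACKING_URI)
    experiment_id = mlflow.create_experiment("my_experiment")
    run = MlflowClient(tracking_uri=TRACKING_URI).create_run(experiment_id)
    tune.run(
        MyTrainable,
        config={"mlflow_run_id": run.info.run_id},
        num_samples=100,
        stop={"training_iteration": 1},
    )
```

With the local file store, log_artifact is essentially a file copy, so 100 trials each writing a distinctly named file into the same run should not clash, as far as I can tell. But this bypasses the mixin entirely, which is why I am asking whether there is an intended way.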
The documentation states:

> This Ray Tune Trainable mixin helps initialize the MLflow API for use with the Trainable class or the @mlflow_mixin function API.
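One plausible reading of that quote (an untested assumption on my part) is that the mixin is meant to be inherited next to Trainable, along these lines:

```python
from ray import tune
from ray.tune.integration.mlflow import MLflowTrainableMixin


class MyMLflowTrainable(MLflowTrainableMixin, tune.Trainable):
    def step(self):
        return {"dummy_metric": 1}


tune.run(
    MyMLflowTrainable,
    config={
        "mlflow": {
            "tracking_uri": "file:///shared/mlruns",
            "experiment_name": "my_experiment",
        }
    },
    stop={"training_iteration": 1},
)
```

But from what I can tell, this would open a separate MLflow run per trial, which is the opposite of the single shared run I am after.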
What am I missing?
How severe does this issue affect your experience of using Ray?
- Medium: It contributes to significant difficulty to complete my task, but I can work around it.