and get the error message Could not find best trial. Did you pass the correct metric parameter?. However, if I restore the experiment with Tuner.restore, I can access the mrr metric.
Why is the ExperimentAnalysis not working as expected?
thank you very much for your help. I have tried out your code and got the same error: RuntimeError: No best trial found for the given metric: mrr. This means that no trial has reported this metric, or all values reported for this metric are NaN. To not ignore NaN values, you can set the `filter_nan_and_inf` arg to False.
The RUN_NAME is the path to the Ray experiment ("/media/compute/homes/mblum/ray_results/feature_vs_objective_link_pred_2023-02-23_10-36-24"). The path should be correct, because restoring seems to work. Moreover, if I call results._experiment_analysis._trial_dataframes, the dataframes do contain the mrr score.
@mblum The issue seems to be that your metric is passed to Tune as a tensor object, rather than a float. Could you try reporting mrr.item() (if you’re using PyTorch) instead?
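A minimal sketch of the suggested fix. Here a NumPy scalar stands in for a PyTorch tensor (it exposes the same .item() API), since the point is only that the reported metric must be a plain Python float rather than a tensor object:

```python
import numpy as np

# mrr_tensor stands in for the torch tensor produced during evaluation.
mrr_tensor = np.array(0.42)
print(isinstance(mrr_tensor, float))   # not a primitive float

# .item() extracts the Python scalar; torch.Tensor.item() works the same way.
mrr_value = mrr_tensor.item()
print(isinstance(mrr_value, float))    # a plain float, safe to report to Tune
```

With PyTorch the equivalent would be session.report({"mrr": mrr.item()}) inside the training function.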
Got it. You could also find the best result through the dataframes directly if you don’t want to re-run the experiments. Generally, you should report primitive types through session.report(metrics), though it does technically allow all sorts of serializable objects to be reported.
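As a sketch of the dataframe route: the trial names and mrr values below are made up, standing in for what results._experiment_analysis._trial_dataframes returns (a dict mapping trial name to its per-iteration metrics dataframe):

```python
import pandas as pd

# Hypothetical stand-in for results._experiment_analysis._trial_dataframes.
trial_dataframes = {
    "trial_a": pd.DataFrame({"training_iteration": [1, 2], "mrr": [0.10, 0.31]}),
    "trial_b": pd.DataFrame({"training_iteration": [1, 2], "mrr": [0.22, 0.48]}),
}

# Pick the trial whose best reported mrr is highest.
best_trial, best_mrr = max(
    ((name, df["mrr"].max()) for name, df in trial_dataframes.items()),
    key=lambda pair: pair[1],
)
print(best_trial, best_mrr)
```

This sidesteps the tensor issue entirely, since pandas already stored the values numerically when the results were written to disk.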
So it’s hard to throw a warning during logging. We may want to add a try/except to convert values to a primitive type when using them to order trials.
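The fallback being suggested could look something like this (a sketch, not Tune’s actual internals — the helper name is made up):

```python
def to_primitive(value):
    # Best-effort conversion of a reported metric to a float before
    # ordering trials; leave the value untouched if conversion fails.
    try:
        return float(value)
    except (TypeError, ValueError):
        return value
```

This would quietly handle tensor-like objects, since anything implementing __float__ (including 0-dim torch tensors) converts cleanly, while non-numeric objects pass through unchanged.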
Thank you very much @justinvyu ! I will follow your suggestion.
You are right, a warning or an automatic type conversion would be great. In the meantime, making this clearer in the documentation would also help a lot.