Resuming Trials with New Checkpoint_Score_Attr / Best Metric

Hi, I am having some issues resuming trials in Ray Tune.

I want to resume one specific trial of an experiment; this trial is the most recent one I have run. I have a specific checkpoint I want to resume from, which I have specified via tune.run(restore='path_to_checkpoint'). To resume, I have tried setting both tune.run(resume=True) and tune.run(resume='LOCAL').
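For reference, my setup looks roughly like this (the trainable and its body are placeholders, not my real training code; the restore path is the one described above):

```python
from ray import tune

# Placeholder trainable using the function API; my real training loop
# saves a checkpoint each epoch and reads checkpoint_dir on resume.
def my_trainable(config, checkpoint_dir=None):
    ...

analysis = tune.run(
    my_trainable,
    restore="path_to_checkpoint",  # the specific checkpoint I want to resume from
    resume=True,                   # I have also tried resume="LOCAL"
)
```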

At first glance this seems to work: my training trial resumes from the last epoch completed (e.g. epoch 27), which matches the current best trial. However, for some reason another trial is then loaded and seems to override the previous one, and the current best trial is set to some other trial I do not want (e.g. at epoch 2). I have a few questions about this:

Q1. Can I specify which epoch the Ray Tune TensorBoard logging resumes from?
Q2. Is my problem only that the TensorBoard logging starts from the wrong epoch, or did Tune actually load the wrong checkpoint, i.e. not the one I specified in tune.run(restore='path_to_checkpoint')?
Q3. Although for now I simply want to resume my experiment from the last checkpoint, at some point I may want to resume it from some best metric, e.g. at epoch 2. Currently I notice that the incorrectly loaded trial uses a 'current_best_trial' that differs from the one in the initially, correctly loaded trial details (e.g. epoch 27). Is there a way to override experiment details such as the 'checkpoint_score_attr' and the best metric to use?
Q4. Upon resuming an experiment, I notice that tune.run creates a new 'tmp_checkpoint' directory inside the trial directory, and the 'checkpoint_dir' passed to the trainable is not the path I specified in tune.run(restore='path_to_checkpoint'). What does this mean?

Thank you!

Alison