Keras callbacks - training or validation accuracy

Hello,

I wonder about a piece of example code from: https://docs.ray.io/en/latest/tune/examples/tune_mnist_keras.html

model.fit(
    x_train,
    y_train,
    batch_size=batch_size,
    epochs=epochs,
    verbose=0,
    validation_data=(x_test, y_test),
    callbacks=[TuneReportCallback({
        "mean_accuracy": "accuracy"
    })])

Is "mean_accuracy": "accuracy" the training accuracy or the validation accuracy? How can I change it?
In typical Keras usage the metric names are 'acc', 'val_acc', 'loss' and 'val_loss'.

Regards Peter

I think this might be an example from a different version of Keras (or TF2?). I think you can just change it to:

TuneReportCallback({"accuracy": "acc"})

Hi @Peter_Pirog, "accuracy" here refers to training accuracy. This metric is specified in the model.compile(...) line in the example, right before we call model.fit(...).

According to the TensorFlow docs, passing either "accuracy" or "acc" to the metrics field in model.compile should have the same effect:

When you pass the strings 'accuracy' or 'acc', we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, tf.keras.metrics.SparseCategoricalAccuracy based on the loss function used and the model output shape.
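As a rough illustration of that dispatch (a simplified sketch, not the actual Keras source; the function name and string return values here are made up for illustration):

```python
def resolve_accuracy_metric(loss_name: str) -> str:
    """Simplified sketch of how Keras maps the "accuracy"/"acc" string
    to a concrete metric class, based on the loss function in use."""
    mapping = {
        "binary_crossentropy": "BinaryAccuracy",
        "categorical_crossentropy": "CategoricalAccuracy",
        "sparse_categorical_crossentropy": "SparseCategoricalAccuracy",
    }
    # Fall back to the plain Accuracy metric for other losses.
    return mapping.get(loss_name, "Accuracy")
```

So for the MNIST example, which uses a sparse categorical loss, "accuracy" ends up meaning sparse categorical accuracy on the training data.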

Thank you for the answers. Do you think it is possible to use validation accuracy as the metric in the tune.run() function? Thanks also for the info on how to change the type of accuracy metric.

Peter

I believe Keras automatically tracks validation metrics by just prepending "val" to the metric name. So you would first have to specify "val_accuracy" or "val_acc" in the TuneReportCallback that gets passed into model.fit: TuneReportCallback("val_accuracy").

This will cause the validation accuracy to get reported to Tune. And then when you call tune.run, specify this as the metric argument: tune.run(..., metric="val_accuracy").


Thank you for the suggestion. I will try it.

Peter


@amogkam your suggestion works fine. I improved my code to:

    model.fit(
        x_train,
        y_train,
        batch_size=batch_size,
        epochs=epochs,
        verbose=0,
        validation_data=(x_test, y_test),
        callbacks=[TuneReportCallback({
            "mean_accuracy": "val_accuracy"  # optional values: ['loss', 'accuracy', 'val_loss', 'val_accuracy']
        })])

@amogkam @rliaw Now I am trying to define my own custom metric and add it to the tuning.

I defined this function:

def rmsle(y_pred, y_test):
    return tf.math.sqrt(tf.reduce_mean(
        (tf.math.log1p(y_pred) - tf.math.log1p(y_test)) ** 2))
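For reference, here is the same RMSLE formula in plain Python, as a sketch for sanity-checking values by hand (assuming one-dimensional inputs; this is not part of the original code):

```python
import math

def rmsle_reference(y_pred, y_true):
    # Plain-Python version of the TF formula above:
    # sqrt(mean((log1p(pred) - log1p(true)) ** 2))
    sq_diffs = [(math.log1p(p) - math.log1p(t)) ** 2
                for p, t in zip(y_pred, y_true)]
    return math.sqrt(sum(sq_diffs) / len(sq_diffs))
```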

model compile:

model.compile(
    loss=rmsle,  # alternatives: "mean_squared_logarithmic_error", "mse"
    optimizer=tf.keras.optimizers.Adam(lr=config["lr"]),
    metrics=['error', rmsle])

fit model:

model.fit(
    X_train,
    y_train,
    batch_size=batch_size,
    epochs=epochs,
    verbose=0,
    validation_data=(X_test, y_test),
    callbacks=[TuneReportCallback({
        "error": "val_error"
    })])
and
analysis = tune.run(
    ...,
    metric="error", ...)

the error is:

report_dict[key] = logs[metric]

KeyError: 'val_error'

I will be grateful for any suggestions.

Maybe try:

callbacks=[TuneReportCallback({
    "error": "error"
})])

?
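One more thing worth checking (an assumption on my part, not something confirmed in this thread): Keras logs a custom metric function under its `__name__`, so the keys available to TuneReportCallback here may be "rmsle" and "val_rmsle" rather than "error"/"val_error". A hypothetical helper sketching which keys to expect in the logs dict:

```python
def expected_log_keys(metric_fn_names, has_validation=True):
    # Hypothetical helper: Keras logs each custom metric under the
    # function's __name__; when validation_data is supplied it also
    # logs a "val_"-prefixed copy of every metric.
    keys = list(metric_fn_names)
    if has_validation:
        keys += ["val_" + name for name in metric_fn_names]
    return keys
```

Printing `logs.keys()` from a small custom callback during one epoch would confirm the actual names on your setup.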