Interpreting error in XGBoost example

In the XGBoost classifier example here, the accuracy is calculated as:
accuracy = 1. - results["eval"]["error"][-1]

So the last (-1) element of the ['error'] list is being selected as the final error. There are 10 elements in the list; what do these 10 elements represent? I assume the last one is taken because there are (perhaps) 10 iterations and the last element represents the final iteration. Where can I find documentation to help interpret what is happening here? If there are 10 'iterations', does each one represent the error after a tenth of the training has occurred?

That is correct: each element in the list corresponds to one boosting iteration. A new tree is added every iteration, so the error of the final model with 10 trees is the last element of the list.

By default the xgboost.train function builds 10 trees (iterations), controlled by the num_boost_round argument - Python API Reference — xgboost 1.5.2 documentation
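
To see this concretely, here is a minimal sketch (using made-up synthetic data, not the dataset from the example) that trains for the default 10 rounds, records the per-round evaluation error via evals_result, and recovers the same accuracy = 1. - results["eval"]["error"][-1] expression:

```python
import numpy as np
import xgboost as xgb

# Synthetic binary-classification data (stand-in for the example's dataset).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

dtrain = xgb.DMatrix(X[:400], label=y[:400])
deval = xgb.DMatrix(X[400:], label=y[400:])

results = {}  # xgboost fills this with one metric value per boosting round
bst = xgb.train(
    params={"objective": "binary:logistic", "eval_metric": "error"},
    dtrain=dtrain,
    num_boost_round=10,       # the default; one new tree per round
    evals=[(deval, "eval")],  # the "eval" name becomes the key in results
    evals_result=results,
)

print(len(results["eval"]["error"]))            # 10 -> one error value per tree/round
accuracy = 1.0 - results["eval"]["error"][-1]   # error of the full 10-tree model
print(accuracy)
```

So each of the 10 values is the evaluation error of the model as it existed after that round (1 tree, 2 trees, ..., 10 trees), not the error after a tenth of the data has been seen.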

Ah, 10 because there are 10 trees. Thank you!
