I am seeing something interesting while replicating the scoring process of an XGBoost GBM. I trained and scored a model with R's XGBoost package and am trying to use the JSON dump of the model object to score the model outside of the XGBoost environment.
I wrote Python code to replicate the scoring process from the outputted JSON file and initially achieved a 100% match between XGBoost's R model object and the JSON-based scoring. However, when I changed some of the hyperparameters, my match rate dropped. The main parameter I changed was the learning rate (eta), which, combined with early stopping, changes the number of trees in the fitted model. Below are the match rates I see between scoring directly in XGBoost and scoring from the JSON object:
    ETA (learning rate):      0.04   0.05   0.06   0.07   0.10   0.20
    Match % (JSON vs RData):  100%   99.9%  99.3%  98.9%  96.7%  94.8%
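For reference, here is a minimal sketch of the kind of traversal my Python scorer performs, assuming the standard node fields of the JSON dump (`split`, `split_condition`, `yes`/`no`/`missing`, `children`, `leaf`). The leaf sum is on the margin scale, and the `base_score` of 0.5 is XGBoost's default; my actual code handles the objective's link function separately:

```python
def score_tree(node, row):
    """Walk one tree from the XGBoost JSON dump down to its leaf value."""
    while "leaf" not in node:
        val = row.get(node["split"])          # feature used at this split
        if val is None:                       # missing value -> 'missing' branch
            next_id = node["missing"]
        elif val < node["split_condition"]:   # XGBoost uses strict less-than
            next_id = node["yes"]
        else:
            next_id = node["no"]
        node = next(c for c in node["children"] if c["nodeid"] == next_id)
    return node["leaf"]

def score_model(trees, row, base_score=0.5):
    """Sum leaf contributions over all trees (margin-scale prediction)."""
    return base_score + sum(score_tree(t, row) for t in trees)
```

A hand-built one-tree dump can be scored with `score_model([tree], {"x": 0.5})` to check the traversal against XGBoost's own predictions.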
My hypothesis is that there is a rounding issue that becomes more apparent as the number of trees increases. Looking through the XGBoost source, I found the following code, which may be the culprit: https://github.com/dmlc/xgboost/blob/master/src/tree/tree_model.cc#L29-L35. However, I can't definitively say that this is the reason for the mismatch.
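To illustrate the hypothesis: as far as I can tell, XGBoost stores leaf weights and accumulates predictions in single precision (`bst_float` is a `float`), while my Python scorer sums in double precision. A toy comparison with made-up leaf contributions shows how the two accumulations drift apart as more terms (i.e. more trees) are added:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-tree leaf contributions; more trees mean more roundings.
leaves = rng.normal(scale=0.01, size=2000)

# Double-precision sum, as a plain Python/NumPy scorer would compute it.
sum64 = float(np.sum(leaves, dtype=np.float64))

# Mimic single-precision accumulation: cast inputs to float32 and
# round after every addition, as a float accumulator would.
sum32 = np.float32(0.0)
for v in leaves.astype(np.float32):
    sum32 = np.float32(sum32 + v)

# The discrepancy is tiny per tree but typically grows with tree count.
print(abs(sum64 - float(sum32)))
```

If this is the cause, I would expect the mismatches to be small in absolute terms and concentrated in predictions near a decision boundary, rather than being gross scoring errors.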
Does anyone have any ideas about the reason for the change in match rates? Is it rounding, or is there another source of the discrepancy?
Appreciate any help that can be provided! Please let me know if I need to give any additional details.