Interpretation of plot_features output in LIME: Supports vs Contradicts doesn't match the sign of feature_value * feature_weight

I've created a GBM model explanation using LIME and used plot_features to plot the results. I'm confused by a mismatch between the output of the plot, in terms of Supports and Contradicts for each feature, and the sign of feature_value * feature_weight in the explanation. The problem at hand is a classification task, and I'm using "lasso_path" for the feature_select parameter of LIME's explain function. Here is an example explanation for one case:

require(data.table)
require(lime)
explanation <- data.table(
  case = 1,
  model_type = 'classification',
  label_prob = 0.9040779,
  model_r2 = 0.09753976,
  model_intercept = 0.243325,
  model_prediction = 0.6457707,
  feature = LETTERS[1:10],
  feature_value = c(0.64588920, -0.17475573, -0.09798112, -0.04044490,
                    -0.19259066, 1.46205926, -0.23087399, 1.24244215,
                    0.37272282, 0.57942359),
  feature_weight = c(0.18008253, -0.16147525, 0.15254990, 0.12139500,
                     -0.11798355, 0.10004934, 0.07455384, 0.05987285,
                     -0.05696310, 0.05036413),
  feature_desc = paste0("some description for ", LETTERS[1:10])
)

plot_features(explanation)

[plot_features output: one bar per feature, each labelled as Supports or Contradicts the prediction]
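For context, the explanation above came from a call of roughly this shape. This is only a sketch, not my exact code: the explainer, model, and data names (explainer, my_gbm, train_x, case_x) are hypothetical; the only parameter taken from my real call is feature_select = "lasso_path".

require(lime)
# hypothetical model and data names; feature_select = "lasso_path" is the
# setting described above
explainer <- lime(train_x, my_gbm)
explanation <- explain(case_x, explainer,
                       n_labels = 1,
                       n_features = 10,
                       feature_select = "lasso_path")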

Why does feature B contradict the model prediction, even though its feature_value * feature_weight is positive? The value of feature_value * feature_weight for feature A is positive too, yet feature A supports the model prediction. Looking at the intercept doesn't help me resolve this either. Conversely, feature G supports the model prediction despite its feature_value * feature_weight being negative. There must be something wrong with the way I understand the LIME explanation output. I realise that the sign of feature_weight alone determines Supports/Contradicts, but shouldn't it be the sign of feature_value * feature_weight instead? In this example, feature B's feature_value is negative and its feature_weight is negative. So, technically, in the local regression model, feature B adds 0.02821873 (-0.17475573 * -0.16147525) to the model intercept when computing the predicted probability.
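For reference, here is the sign comparison I'm doing by hand, sketched with the explanation data.table defined above (contribution is my own column name for feature_value * feature_weight):

# compare the sign of feature_weight (which appears to drive the
# Supports/Contradicts label) with the sign of feature_value * feature_weight
explanation[, contribution := feature_value * feature_weight]
explanation[, .(feature, feature_weight, contribution,
                sign_weight = sign(feature_weight),
                sign_contribution = sign(contribution))]
# feature B: value and weight are both negative, so its contribution is
# positive (+0.02821873), yet plot_features labels B as Contradicts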

