2.2.4 Model interpretation
Machine learning models can be difficult to interpret, but the SHapley Additive exPlanations (SHAP) method, a game-theoretic approach proposed by Lundberg and Lee (Lundberg & Lee, 2017), helps to overcome this challenge by attributing the model's output to its input features. SHAP assigns each feature a value that quantifies its contribution to a given prediction: a positive SHAP value indicates that the feature pushes the prediction higher, whereas a negative value indicates that it pushes the prediction lower. Features are then ranked by the magnitude of their SHAP values (for example, the mean absolute SHAP value across samples), so that features with larger values are considered more important to the model.
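To illustrate how SHAP values can be computed and used to rank features, the following minimal sketch uses the Python shap library with an illustrative tree-based model; the synthetic data and random-forest regressor are assumptions for demonstration only and do not reflect the model used in this work.

```python
# Minimal sketch: computing SHAP values and ranking feature importance.
# The data and model below are illustrative assumptions, not the study's own.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Illustrative data and model
X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Global importance: mean absolute SHAP value per feature, largest first
importance = np.abs(shap_values).mean(axis=0)
for i in np.argsort(importance)[::-1]:
    print(f"feature {i}: mean |SHAP| = {importance[i]:.3f}")

# A summary plot shows both the magnitude and the direction (positive or
# negative) of each feature's contribution across all samples:
# shap.summary_plot(shap_values, X)
```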