Models are funny things. You can use all manner of calculus, statistics, and even linear algebra to shape and reshape data into something unrecognizable, yet still end up with a way to make reasonably accurate predictions. In an effort to make sense of the model, I ran across something even my instructor had not seen. SHAP (SHapley Additive exPlanations) is a game-theoretic approach to explaining the output of any machine learning model. It can break down a single prediction to show the impact of each feature, which leads to some amazing visualizations. After this boot camp, I will need to explore modeling with SHAP.
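To get a feel for the game-theoretic idea behind SHAP, here is a minimal sketch (not using the `shap` library itself) that computes exact Shapley values for a toy model: each feature's contribution is its average marginal contribution across all coalitions of the other features, with "absent" features replaced by a background value. The model, weights, and background here are all made up for illustration.

```python
from itertools import combinations
from math import factorial

def model(x):
    # Toy linear model standing in for any black-box predictor;
    # the weights are hypothetical.
    w = [2.0, -1.0, 0.5]
    return sum(wi * xi for wi, xi in zip(w, x))

def shapley_values(f, x, background):
    """Exact Shapley values for prediction f(x): for each feature i,
    average the marginal contribution f(S + {i}) - f(S) over all
    coalitions S, weighting each coalition size as in the Shapley formula.
    Features outside the coalition are set to the background value."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else background[j] for j in range(n)]
                without_i = [x[j] if j in S else background[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

x = [1.0, 2.0, 3.0]
bg = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, bg)
# Additivity: the contributions sum to f(x) - f(background),
# which is the property that makes SHAP's per-feature breakdowns add up.
```

For a linear model this reduces to each weight times the feature's deviation from the background, but the same brute-force computation works for any model; the real `shap` library uses clever approximations (TreeExplainer, KernelExplainer) to avoid the exponential cost.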
More information and visualization examples can be found in articles like [this one](https://towardsdatascience.com/explain-any-models-with-the-shap-values-use-the-kernelexplainer-79de9464897a). I’ve heard there are even YouTube videos for those interested in learning SHAP.