For each feature in a dataset (including datasets used to train or test a model), System computes feature statistics that vary with the feature’s data type. For example, for numerical features, System computes summary statistics and produces a histogram of the feature’s distribution. When features are “character” or “string” types, System computes statistics on the string length only, in order to mask potential sources of personally identifiable information (PII) or protected health information (PHI).
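As a minimal sketch of this kind of per-feature profiling (assuming pandas and NumPy; the column names and toy data are illustrative, not System’s actual schema):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.integers(18, 90, size=500),              # numerical feature
    "name": ["patient_" + str(i) for i in range(500)],  # string feature
})

# Numerical features: summary statistics plus a histogram of the distribution.
numeric_summary = df["age"].describe()
hist_counts, hist_edges = np.histogram(df["age"], bins=20)

# String features: statistics on string length only, so the raw values
# (potential PII/PHI) are never surfaced.
length_summary = df["name"].str.len().describe()
```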
For the relevant features in each dataset, System also computes pairwise correlations between features, with the choice of measure depending on their data types.
When pairs of features are both numerical, System calculates Pearson r correlation values and Kendall rank correlation values.
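A sketch of those two measures with SciPy (the arrays here are synthetic stand-ins for two numerical feature columns):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.6 * x + rng.normal(scale=0.5, size=200)

pearson_r, _ = stats.pearsonr(x, y)      # linear correlation
kendall_tau, _ = stats.kendalltau(x, y)  # rank correlation
```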
When pairs of features are both numerical and time series, System also calculates statistics on the data after differencing and detrending, and computes “lag” correlations between features (where one time series feature is related to lagged values of another time series feature). The lag is chosen according to the unit of the time series: a 30-unit lag for 1-day data, a 12-unit lag for 7-day data, a 6-unit lag for 30-day data, and a 10-unit lag for 365-day data.
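A sketch of that handling for daily data (hence the 30-unit lag), assuming pandas and scipy.signal for the differencing and detrending; System’s exact preprocessing is not specified here:

```python
import numpy as np
import pandas as pd
from scipy import signal

rng = np.random.default_rng(2)
a = pd.Series(np.cumsum(rng.normal(size=365)))              # toy daily series
b = a.shift(5).fillna(0) + rng.normal(scale=0.1, size=365)  # series that lags behind a

# Statistics on the differenced and detrended data.
diff_stats = a.diff().dropna().describe()
detrended_stats = pd.Series(signal.detrend(b)).describe()

# "Lag" correlations: one series against lagged values of the other,
# up to a 30-unit lag because the unit here is 1 day.
lag_corrs = {lag: a.corr(b.shift(lag)) for lag in range(1, 31)}
strongest_lag = max(lag_corrs, key=lambda k: abs(lag_corrs[k]))
```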
When pairs of features are both categorical, System calculates Cramér’s V association values.
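A sketch of Cramér’s V derived from the chi-squared statistic of a contingency table (the helper function and toy columns are illustrative, not System’s implementation):

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(x: pd.Series, y: pd.Series) -> float:
    """Cramér's V association between two categorical columns."""
    table = pd.crosstab(x, y)
    chi2 = chi2_contingency(table)[0]
    n = table.to_numpy().sum()
    k = min(table.shape) - 1
    return float(np.sqrt(chi2 / (n * k))) if k > 0 else 0.0

df = pd.DataFrame({"color": ["red", "blue"] * 50,
                   "size": ["S", "M", "L", "M"] * 25})
print(cramers_v(df["color"], df["size"]))
```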
When one feature in a pair is numerical and the other is categorical, System calculates Kruskal-Wallis H-test values.
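A sketch of the Kruskal-Wallis H-test for such a pair, grouping the numerical feature by the categorical one (toy data, using scipy.stats.kruskal):

```python
import numpy as np
import pandas as pd
from scipy.stats import kruskal

rng = np.random.default_rng(3)
numeric = pd.Series(rng.normal(size=300))
categorical = pd.Series(rng.choice(["A", "B", "C"], size=300))

# Split the numerical feature into groups defined by the categorical feature,
# then test whether the group distributions differ.
groups = [values for _, values in numeric.groupby(categorical)]
h_stat, p_value = kruskal(*groups)
```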
For datasets retrieved with the platform’s data warehouse integrations (Redshift, BigQuery, Snowflake), System currently computes only Pearson r correlation values (for numerical and time series features), along with the relevant relationships between time series features.
For models shared on System, the platform collects information about the model and computes performance metrics and feature importance values based on submitted “test” datasets (or “training” datasets, when also submitted).
For model objects, System collects and presents on model pages metadata about the statistical package used to train the model (e.g., scikit-learn, XGBoost, statsmodels, TensorFlow), the model specification (including any specified hyperparameters), and the names of the features used to train the model.
Depending on the model type, relevant performance metrics are computed using the “test” sample provided. For classifier models, these include (but are not limited to) accuracy, precision, recall, F1 score, ROC AUC, and confusion matrices. For regression models, these include R2 scores and measures of prediction error (e.g., RMSE).
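A minimal sketch of the classifier case with scikit-learn, where the toy data and logistic regression stand in for whatever model and test set a user submits:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, confusion_matrix)
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

y_pred = clf.predict(X_test)
y_score = clf.predict_proba(X_test)[:, 1]

metrics = {
    "accuracy": accuracy_score(y_test, y_pred),
    "precision": precision_score(y_test, y_pred),
    "recall": recall_score(y_test, y_pred),
    "f1": f1_score(y_test, y_pred),
    "roc_auc": roc_auc_score(y_test, y_score),
}
conf_mat = confusion_matrix(y_test, y_pred)
# A regression model would instead report r2_score and error measures such as RMSE.
```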
Additionally, feature importance scores are calculated. For each model performance metric, a “permutation score” is computed: each feature’s contribution is estimated by measuring how randomly permuting that feature’s values affects the performance metric. This effect is reported, and features are assigned ranks based on their relative importance.
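Continuing from the classifier sketch above, one way to compute such scores is scikit-learn’s permutation_importance; the accuracy scoring here is just one of the metrics the score could be taken against:

```python
from sklearn.inspection import permutation_importance

# Drop in the chosen metric when each feature is randomly permuted,
# averaged over repeats; larger drops indicate more important features.
result = permutation_importance(clf, X_test, y_test,
                                scoring="accuracy", n_repeats=10,
                                random_state=0)
feature_ranking = result.importances_mean.argsort()[::-1]
```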
System also computes interpretability and explainability measures when appropriate. Model pages often include partial dependence plots (PDPs) alongside feature importance values to show the marginal impact of a feature on a model’s predicted values. Model pages will soon also include SHAP importance plots for features.
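For example, a partial dependence plot for a single feature can be produced with scikit-learn, again reusing the fitted clf and X_test from the sketch above (the feature index is arbitrary):

```python
from sklearn.inspection import PartialDependenceDisplay

# Marginal effect of feature 0 on the model's predicted values.
PartialDependenceDisplay.from_estimator(clf, X_test, features=[0])
# SHAP importance plots would typically be produced with the separate `shap` package.
```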
For models deployed “in production,” drift in both model performance and feature importance is tracked.