# What types of data do we collect?


System programmatically collects and computes evidence from each source. The sections below describe, for each source type, the information collected and computed.

**How System determines “strength”**

Strength is an algorithm-agnostic measure of the magnitude of the effect implied by an association. System's methodology differs by the type of association.

For correlation-style associations (such as Pearson's *r* or Kendall's τ), we use commonly accepted community guidelines to bucket those associations into one of five categories, from Very Weak to Very Strong (see the first strength table at the end of this page).
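Under these guidelines, bucketing reduces to simple threshold checks. A minimal sketch (function name and structure are illustrative, not System's actual code), using the Pearson/Kendall cut-offs from the strength table at the end of this page:

```python
def bucket_correlation(value: float) -> str:
    """Map the absolute value of a correlation (Pearson r or Kendall tau)
    to one of the five strength buckets. Cramer's V uses its own, tighter
    cut-offs (see the strength table), which are not handled here."""
    v = abs(value)
    if v < 0.1:
        return "Very Weak"
    if v < 0.3:
        return "Weak"
    if v < 0.6:
        return "Medium"
    if v < 0.9:
        return "Strong"
    return "Very Strong"
```

For example, a Pearson *r* of 0.983 falls in [0.9, 1] and is labeled Very Strong.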

For associations derived from predictive models, we use the evidence already on System to bin the value of a feature's importance into one of the same five buckets. The feature importance value (e.g., permutation score), combined with the performance of the model that the association was derived from (e.g., F1 score), is compared with similar associations on System (see the second strength table at the end of this page).
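As a rough sketch of that comparison (the normalization against "similar associations on System" is simplified here to a fraction of the maximum product observed on the platform, which is an assumption):

```python
def bucket_model_association(permutation_score: float,
                             model_metric: float,
                             max_product_on_system: float) -> str:
    """Bin a model-derived association: multiply the feature's permutation
    score by the model's performance metric (R2 for regressors, F1 for
    classifiers), express it as a fraction of the largest such product on
    System, and map that fraction to the five strength buckets.
    `max_product_on_system` is a hypothetical input standing in for the
    platform-wide comparison described above."""
    fraction = (permutation_score * model_metric) / max_product_on_system
    for upper, label in [(0.1, "Very Weak"), (0.3, "Weak"),
                         (0.6, "Medium"), (0.9, "Strong")]:
        if fraction < upper:
            return label
    return "Very Strong"
```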

**Examples**

Examples of associations from each source type are listed at the end of this page.

**Peer-Reviewed Scientific Articles**

System has fine-tuned multiple Large Language Models (LLMs) on a human-labeled scientific corpus to extract statistical relationships from text. At present, System supports more than 100 statistical associations and algorithms, including ratio types, differences and gains, correlations, and regression and other coefficients. System also calculates differences and gains from two reported group values (e.g., when given two means and a p-value, our models calculate and return a mean difference).
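The derived mean difference itself is simple subtraction; recovering an approximate confidence interval from the reported p-value is a standard meta-analytic technique, shown here as a sketch (the document does not specify System's exact formula):

```python
from statistics import NormalDist

def mean_difference(mean_a: float, mean_b: float, p_value: float):
    """Return the mean difference between two reported group means, plus an
    approximate 95% CI back-derived from the two-sided p-value. The CI step
    assumes a normal test statistic and is illustrative, not System's
    documented method."""
    diff = mean_a - mean_b
    z = NormalDist().inv_cdf(1 - p_value / 2)  # z implied by the p-value
    se = abs(diff) / z                          # implied standard error
    return diff, (diff - 1.96 * se, diff + 1.96 * se)
```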

These relationships consist of:

- A pair of variables (e.g., the intervention and outcome, or the independent and dependent variables)

- A point estimate of the statistic (e.g., an odds ratio or hazard ratio value)

- A confidence interval and confidence level

- A p-value
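These fields could be represented with a simple container such as the following (the names and types are hypothetical; System's actual schema is not published):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ExtractedAssociation:
    """Hypothetical record for one extracted statistical relationship."""
    variables: Tuple[str, str]   # (intervention/independent, outcome/dependent)
    statistic_type: str          # e.g. "Odds Ratio", "Hazard Ratio"
    point_estimate: float
    confidence_interval: Optional[Tuple[float, float]] = None
    confidence_level: float = 0.95
    p_value: Optional[float] = None
```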

We have also layered in causal and mechanistic statements generated from rules-based models applied to scientific articles.

These relationships consist of:

- A pair of agents (subject and object)

- A statement type (e.g., activates, inhibits, phosphorylates)

Additionally, when available, System extracts various population characteristics for each study. For example:

- Study population type (e.g., humans, rats)

- Sample size

- Age range

- Sex

- Location

**Dataset**

For each feature in a dataset, System computes feature statistics which vary depending on the data type of the feature. For example, for numerical features, System computes summary statistics and produces a histogram of the feature’s distribution. When features are “character” or “string” types, System computes statistics based on the string length so as to mask potential sources of personally identifiable information (PII) or protected health information (PHI).
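In pandas terms (an assumption; System's internals are not published), the masking behavior might look like this: numerical features are summarized directly, while string features are first reduced to their lengths, so raw values never appear in the statistics:

```python
import pandas as pd

def feature_statistics(series: pd.Series) -> pd.Series:
    """Per-feature summary statistics. String-like features are converted to
    string lengths before summarizing, so potential PII/PHI values are never
    reported -- only the distribution of their lengths."""
    if pd.api.types.is_numeric_dtype(series):
        return series.describe()  # count, mean, std, min, quartiles, max
    return series.astype("string").str.len().describe()
```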

For the relevant features in each dataset, System also computes pairwise correlations between features, with the method depending on their data types.

When pairs of features are both numerical, System calculates Pearson *r* correlation values and Kendall rank correlation values.

When pairs of features are both numerical and time series, System also calculates statistics on the data after differencing and detrending, and computes “lag” correlations between features (where one time series feature is related to lagged values of another time series feature). Appropriate lags are chosen according to the unit of the time series:

`{1 day: 30-unit lag, 7 days: 12-unit lag, 30 days: 6-unit lag, 365 days: 10-unit lag}`
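A lag-correlation search under that mapping might be sketched as follows (function name and structure are assumptions; in practice this would run on the differenced and detrended series described above):

```python
import pandas as pd

# Maximum lag to consider for each time-series unit, from the mapping above.
MAX_LAG = {"1 day": 30, "7 days": 12, "30 days": 6, "365 days": 10}

def best_lag_correlation(x: pd.Series, y: pd.Series, unit: str):
    """Correlate y against lagged copies of x and return the
    (lag, Pearson r) pair with the largest absolute correlation."""
    return max(
        ((lag, y.corr(x.shift(lag))) for lag in range(1, MAX_LAG[unit] + 1)),
        key=lambda pair: abs(pair[1]),
    )
```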

When pairs of features are categorical, System calculates Cramér's V association values.

When one feature in a pair is numerical and the other is categorical, System calculates Kruskal–Wallis H-test values.
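Both measures follow standard formulas, sketched here with scipy and pandas (an assumption about tooling; System's implementation is not published):

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency, kruskal

def cramers_v(cat_a, cat_b) -> float:
    """Cramer's V between two categorical features, from the chi-squared
    statistic of their contingency table (uncorrected, per the classic formula)."""
    table = pd.crosstab(pd.Series(cat_a), pd.Series(cat_b))
    chi2 = chi2_contingency(table, correction=False)[0]
    n = table.to_numpy().sum()
    return float(np.sqrt(chi2 / (n * (min(table.shape) - 1))))

def numeric_vs_categorical(values, groups):
    """Kruskal-Wallis H-test: split the numerical feature by category and
    test whether the groups share a common distribution."""
    samples = pd.Series(values).groupby(pd.Series(groups)).apply(list)
    return kruskal(*samples)  # (H statistic, p-value)
```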

For datasets retrieved with the platform's data warehouse integrations (Redshift, BigQuery, Snowflake), System currently computes only Pearson *r* correlation values (for numerical and time series features) and the relevant relationships between time series features.

**Model**

Training and test datasets submitted with a model are processed exactly as described under **Dataset** above: System computes per-feature statistics (reducing string features to length statistics to mask potential PII/PHI) and pairwise correlations appropriate to each pair of data types, with the same limitations for data warehouse integrations.

In addition, for models shared on System, the platform collects information about the model and computes performance metrics and feature importance values based on submitted "test" datasets (or "training" datasets, when also submitted).

For model objects, System collects metadata about the statistical package used to train the model (e.g., scikit-learn, XGBoost, statsmodels, TensorFlow), the model specification (including any specified hyperparameters), and the names of the features used to train the model, all of which are presented on model pages.

Depending on the model type, relevant performance metrics are computed using the provided "test" sample. For classifier models, these include (but are not limited to) accuracy, precision, recall, F1 score, ROC AUC, and confusion matrices. For regression models, these include R² scores and measures of prediction error (RMSE, etc.).
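With scikit-learn (used here for illustration; the document does not name System's tooling), the classifier metrics above can be computed on a test sample like so:

```python
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score, roc_auc_score)

def classifier_report(y_true, y_pred, y_score) -> dict:
    """Compute the classifier performance metrics listed above on a
    held-out "test" sample. `y_score` holds predicted probabilities for
    the positive class, needed for ROC AUC."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
        "roc_auc": roc_auc_score(y_true, y_score),
        "confusion_matrix": confusion_matrix(y_true, y_pred),
    }
```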

Additionally, feature importance scores are calculated. For each model performance metric, a "permutation score" is computed: each feature's contribution is estimated by measuring how randomly permuting that feature's values affects the performance metric. This effect is reported, and features are assigned rank values based on their relative importance.
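scikit-learn's `permutation_importance` implements exactly this idea; a self-contained sketch on synthetic data (the model choice is illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=200)  # only feature 0 matters

model = RandomForestRegressor(random_state=0).fit(X, y)
# Shuffle each feature in turn and measure the drop in R2 (the model's
# performance metric); the mean drop is that feature's permutation score.
result = permutation_importance(model, X, y, scoring="r2",
                                n_repeats=5, random_state=0)
ranking = np.argsort(-result.importances_mean)  # most important feature first
```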

System also computes interpretability and explainability measures when appropriate. Model pages often include partial dependence plots (PDPs) alongside feature importance values to show the marginal impact of a feature on a model’s predicted values. Model pages will soon also include SHAP importance plots for features.

For models deployed “in production,” drift in both model performance and feature importance is tracked.

**Strength buckets for correlation-style associations**

Strength | Pearson *r* | Kendall τ | Cramér's V | Effect Size |
---|---|---|---|---|
Very Weak | [0, 0.1) | [0, 0.1) | [0, 0.05) | [0, 0.1) |
Weak | [0.1, 0.3) | [0.1, 0.3) | [0.05, 0.1) | [0.1, 0.3) |
Medium | [0.3, 0.6) | [0.3, 0.6) | [0.1, 0.15) | [0.3, 0.6) |
Strong | [0.6, 0.9) | [0.6, 0.9) | [0.15, 0.25) | [0.6, 0.9) |
Very Strong | [0.9, 1] | [0.9, 1] | [0.25, …) | [0.9, 1] |

**Strength buckets for associations derived from predictive models**

Strength | Regressors (R² score × permutation score) | Classifiers (F1 score × permutation score) |
---|---|---|
Very Weak | [0, 0.1) of max on System | [0, 0.1) of max on System |
Weak | [0.1, 0.3) of max on System | [0.1, 0.3) of max on System |
Medium | [0.3, 0.6) of max on System | [0.3, 0.6) of max on System |
Strong | [0.6, 0.9) of max on System | [0.6, 0.9) of max on System |
Very Strong | [0.9, 1] of max on System | [0.9, 1] of max on System |

Source Type | Statistical Association Retrieved | Strength | Significance |
---|---|---|---|
Dataset | Pearson *r*: 0.983 between `primary_school_life_expectancy_years` and `primary_school_completion_rate_female` | Very Strong | p < 0.001 |
Dataset | Max Pearson *r* (when one feature is lagged): 0.867 between `Confirmed Cases Of COVID-19` and `Deaths From COVID-19` (+23 lag) | Strong | p < 0.001 |
Model | R² permutation score: 0.225 between `Two_Week_Prior_Weekly_Deaths` and `Weekly_Deaths` | Very Strong | p < 0.001 |
Paper | Adjusted odds ratio: 1.94 between `Individual Is A Lifetime Cigarette Smoker` and `Individual Consumes Caffeinated Coffee` | Very Strong | p = 0.003 |