Equity and Accuracy in Cardiovascular Disease Risk Prediction: Evaluating Machine Learning Models Across Racial and Gender Groups

Ramon Ledwidge*

Department of Cardiology, Nagoya University, Nagoya, Japan

*Corresponding Author:
Ramon Ledwidge, Department of Cardiology, Nagoya University, Nagoya, Japan; E-mail: shengbo@gmail.com

Received date: May 13, 2024, Manuscript No. IPJHCR-24-19371; Editor assigned date: May 16, 2024, PreQC No. IPJHCR-24-19371 (PQ); Reviewed date: May 30, 2024, QC No. IPJHCR-24-19371; Revised date: June 06, 2024, Manuscript No. IPJHCR-24-19371 (R); Published date: June 13, 2024, DOI: 10.36648/2576-1455.8.2.72

Citation: Ledwidge R (2024) Equity and Accuracy in Cardiovascular Disease Risk Prediction: Evaluating Machine Learning Models Across Racial and Gender Groups. J Heart Cardiovasc Res Vol.8 No.2: 72.


Description

The use of predictive models in risk assessment is a key component of precision medicine and can be used to identify patients for early prevention. Machine Learning (ML) has been increasingly used to learn from massive and complex health data, such as Electronic Health Records (EHR), to support clinical decisions such as diagnosis, prediction of adverse events and treatment recommendations. In practice, however, the dataset used to train models may contain systematic bias, such as sampling bias (e.g., underrepresentation of a subcohort), differential deficiencies or statistical estimation errors introduced through pooling and preprocessing. It remains uncertain whether ML models trained on such data reinforce these biases and make unfair judgments about certain groups of people (e.g., by age, gender or race). Disparities created by clinical prediction models would affect health equity and the portability of such models.

ML-based models

Assessing the bias and fairness of ML models has attracted much attention in the machine learning and statistical communities. Researchers have proposed methods to assess and mitigate bias in a variety of applications that may negatively affect underrepresented groups, such as predicting recidivism, credit risk and income. However, systematic studies of bias in clinical prediction models are scarce, because real health data are not widely available and the causal structures of high-dimensional health data need to be better understood. Measures and methods aimed at identifying and mitigating bias in clinical settings therefore need to be explored. Because cardiovascular disease is the leading cause of death in the United States and worldwide, early detection and prevention are critical to prolonging life and reducing mortality, disability and costs. In this study, we examined the fairness of ML-based models for predicting Cardiovascular Disease (CVD) across racial and gender groups and compared them to the widely used American Heart Association (AHA) pooled cohort risk equations. In addition, we tested several bias reduction methods to evaluate their effectiveness in reducing bias and their effect on accuracy. The aim of the study was to understand the importance of bias detection and assessment in ML-based models, evaluate metrics that quantify the fairness of ML-based clinical prediction models and implement methods to mitigate ML model bias. The metrics and approaches are not limited to predicting cardiovascular disease and could be extended to other diseases as well. CVD is a complex disease with several known risk factors that develop over time. Studies have shown that ML models using longitudinal EHR data improved the accuracy of predicting 10-year CVD risk, supporting early intervention.
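To illustrate how group-level performance and fairness gaps of this kind can be quantified, the following is a minimal sketch in Python on synthetic data. The logistic-regression baseline, the 0.5 decision threshold, the two-group protected attribute and all variable names are illustrative assumptions and are not taken from the study or its actual pipeline.

```python
# Minimal sketch: group-wise performance and fairness gaps for a risk model.
# Synthetic data only; not the study's models, cohort, or metrics pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)

# Synthetic stand-in for an EHR-derived cohort: features, a binary CVD
# outcome, and a protected attribute (e.g., a race or gender group).
n = 5000
X = rng.normal(size=(n, 8))
group = rng.integers(0, 2, size=n)              # two demographic groups
logits = X @ rng.normal(size=8) + 0.5 * group
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
risk = model.predict_proba(X)[:, 1]
pred = (risk >= 0.5).astype(int)                # illustrative threshold

def group_metrics(y_true, y_pred, y_score):
    """AUROC, TPR, FPR and positive prediction rate for one subgroup."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "auroc": roc_auc_score(y_true, y_score),
        "tpr": tp / (tp + fn) if (tp + fn) else float("nan"),
        "fpr": fp / (fp + tn) if (fp + tn) else float("nan"),
        "positive_rate": y_pred.mean(),
    }

per_group = {g: group_metrics(y[group == g], pred[group == g], risk[group == g])
             for g in (0, 1)}

# Simple disparity summaries: demographic-parity and equalized-odds gaps.
dp_gap = abs(per_group[0]["positive_rate"] - per_group[1]["positive_rate"])
eo_gap = max(abs(per_group[0]["tpr"] - per_group[1]["tpr"]),
             abs(per_group[0]["fpr"] - per_group[1]["fpr"]))
print(per_group)
print(f"demographic parity gap: {dp_gap:.3f}, equalized odds gap: {eo_gap:.3f}")
```

The same per-group computation can be repeated for any model under comparison (for example, an ML model versus a traditional risk equation) so that accuracy and disparity are read off side by side.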

Health disparities

The use of predictive models in clinical practice to identify patients at high risk of adverse events is becoming increasingly common. In order to provide preventive care that minimizes health disparities, it is critical that these models provide fair and accurate predictions. Biased assessments may result in some individuals missing early intervention or an accurate prognosis, further exacerbating existing health disparities. Our study had several strengths. We evaluated biases in several clinical models, including a classic clinical tool (a non-ML model), ML models and a deep learning model. We also externally evaluated the ML models in All of Us, a nationally representative cohort, and evaluated three bias mitigation methods. This is one of the first studies to comprehensively examine fairness across demographic subgroups in CVD prediction models, and it provides a generalizable framework for examining bias in other disease prediction models.
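For context, one widely used pre-processing mitigation strategy is to reweight training samples so that each demographic group contributes equally to model fitting. The sketch below illustrates this generic idea only; it is an assumption for exposition and is not presented as one of the three methods evaluated in the study. The data and variable names are hypothetical.

```python
# Generic bias-mitigation sketch: group-balanced sample reweighting.
# Illustrative only; not one of the study's evaluated mitigation methods.
import numpy as np
from sklearn.linear_model import LogisticRegression

def group_balanced_weights(group):
    """Weight each sample inversely to the frequency of its group."""
    group = np.asarray(group)
    counts = {g: (group == g).sum() for g in np.unique(group)}
    return np.array([len(group) / (len(counts) * counts[g]) for g in group])

# Hypothetical training features, outcome labels and protected attribute.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 8))
group = rng.integers(0, 2, size=1000)
y = (rng.random(1000) < 0.3).astype(int)

weights = group_balanced_weights(group)
model = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=weights)
```

After retraining with such weights, the same per-group metrics described above can be recomputed to check whether disparity shrinks and how much overall accuracy changes.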
