Improving Accuracy and Reducing Bias in Hospital AI Predictive Models

January 27, 2025

The increasing reliance on artificial intelligence (AI) and predictive models integrated with electronic health record (EHR) systems among American hospitals has brought both benefits and challenges to patient care. With 65% of U.S. hospitals using these advanced technologies, the focus is shifting to how accurately and fairly these tools operate. However, a new issue has emerged: many hospitals are not adequately assessing these models for accuracy or bias, raising significant concerns about their impact on healthcare outcomes.

Practical Application of Predictive Models

Predictive models play a crucial role in clinical and administrative tasks within hospitals. They can estimate patient health risks, helping healthcare professionals identify high-risk outpatients and manage inpatient health trajectories, and they streamline administrative operations such as appointment scheduling and staff workload optimization. Despite these advantages, many hospitals do not properly evaluate the tools for inaccuracies and biases that could adversely affect patient care.
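To make the idea concrete, the sketch below trains the kind of simple risk model described above. It is a minimal illustration in Python on synthetic data; the features (age, prior admissions, chronic-condition count), the outcome, and the coefficients are assumptions for demonstration, not any hospital's actual model.

```python
# Minimal sketch of an outpatient risk model (illustrative, synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Hypothetical EHR-style features: age, prior admissions, chronic conditions.
X = np.column_stack([
    rng.normal(60, 15, n),   # age in years
    rng.poisson(1.0, n),     # admissions in the past year
    rng.poisson(2.0, n),     # count of chronic conditions
])
# Synthetic outcome loosely tied to the features, for demonstration only.
logit = 0.03 * (X[:, 0] - 60) + 0.6 * X[:, 1] + 0.3 * X[:, 2] - 2.0
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)
risk = model.predict_proba(X)[:, 1]  # per-patient risk scores in [0, 1]
print(f"Patients above 0.5 predicted risk: {(risk >= 0.5).sum()}")
```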

Evaluating Accuracy and Bias

Surveys reveal that only 61% of hospitals using AI and predictive models assess their accuracy against their own patient data, and only 44% evaluate the models for bias. This gap in local scrutiny means many hospitals may be deploying these technologies without understanding how they behave for their own populations: a model that performed well on a vendor's development data can still be poorly calibrated, or simply less accurate, when applied to a different patient mix. Local evaluation is essential to ensure that predictive models improve patient outcomes without introducing or exacerbating biases.
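What such a local check involves can be quite concrete. Below is a minimal sketch in Python, assuming a hospital has exported a labeled validation cohort with the model's risk scores attached; the file and column names are hypothetical.

```python
# Minimal sketch of local accuracy evaluation on a hospital's own data.
import pandas as pd
from sklearn.metrics import roc_auc_score, brier_score_loss

# Hypothetical export: one row per patient, with the observed outcome and
# the risk score the predictive model produced for that patient.
df = pd.read_csv("local_validation_cohort.csv")

y_true = df["readmitted_30d"]  # observed outcome in the local population
y_score = df["risk_score"]     # model's predicted probability

print(f"Local AUROC: {roc_auc_score(y_true, y_score):.3f}")          # discrimination
print(f"Local Brier score: {brier_score_loss(y_true, y_score):.3f}")  # calibration
```

Discrimination (AUROC) alone is not enough; calibration matters because clinical thresholds act on the predicted probabilities themselves.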

Regulatory Focus and FDA Guidelines

The growing integration of AI in healthcare has prompted increased regulatory attention. The Food and Drug Administration (FDA) has issued draft guidance to enhance transparency and address biases within AI-enabled devices. This move aims to standardize the information required for premarket submissions, ensuring that manufacturers disclose their validation methods and approaches to reducing bias. Additionally, a final rule by the Assistant Secretary for Technology Policy mandates that health IT companies detail their bias mitigation strategies for decision support tools, including predictive models not classified as medical devices.

Usage Patterns and Concerns

Most hospitals rely on predictive models to predict inpatient health trajectories (92%) and to pinpoint high-risk outpatients (79%); administrative uses, such as scheduling, are also common (51%). Notably, hospitals that develop their own predictive models are more likely to evaluate them for accuracy and bias than hospitals that use externally developed tools. This discrepancy highlights the need for rigorous evaluation across the board, regardless of a model's origin.

Health Inequities and Misconceptions

It is alarming that 56% of hospitals using predictive models do not assess them for bias, risking the propagation of existing health inequities. A related problem is the common misconception that administrative tools carry less risk than clinical ones: studies have shown that patients are uncomfortable with models predicting non-clinical outcomes, such as bill payment or missed appointments, precisely because those predictions can influence the level of care they receive.
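A bias assessment can start from the same local cohort: compare how the model behaves across demographic groups at its operating threshold. The sketch below is one simple form of such an audit, assuming a hypothetical group column and a 0.5 cutoff; the right attributes, metrics, and threshold depend on the model and how it is used.

```python
# Minimal sketch of a subgroup bias audit (column names are hypothetical).
import pandas as pd

df = pd.read_csv("local_validation_cohort.csv")
THRESHOLD = 0.5  # assumed operating cutoff for flagging "high risk"

df["flagged"] = df["risk_score"] >= THRESHOLD

for group, sub in df.groupby("group"):
    positives = sub[sub["readmitted_30d"] == 1]
    # False negative rate: truly high-risk patients the model failed to flag.
    fnr = 1.0 - positives["flagged"].mean() if len(positives) else float("nan")
    print(f"{group}: flag rate {sub['flagged'].mean():.1%}, FNR {fnr:.1%}")
```

Large gaps in flag rates or false negative rates across groups are exactly the kind of signal that local evaluation is meant to surface.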

Supporting Independent Hospitals

Hospitals that lack the resources to build models in-house, often smaller or independent institutions, tend to rely on externally developed tools, and as noted above these are the hospitals least likely to evaluate models locally. As AI systems become integral to more of them, it is crucial that the models be rigorously tested for effectiveness and impartiality regardless of where they originated. Proper assessment practices protect and improve the quality of patient care, and transparency and accountability in AI implementations are what make it possible to harness the technology's full potential while minimizing unintended harm, leading to safer and more equitable healthcare delivery.
