Artificial intelligence is helping UC Davis Health predict which patients may need immediate care and possibly prevent them from being hospitalized.
The population health AI predictive model, created by a multidisciplinary team of experts, is called BE-FAIR (Bias-reduction and Equity Framework for Assessing, Implementing, and Redesigning). Its algorithm was programmed to identify patients who could benefit from care management services, so health problems can be addressed before they lead to emergency or hospital care.
The team described their approach and the creation of the BE-FAIR model in an article published in the Journal of General Internal Medicine. The paper explains how BE-FAIR can advance health equity and how other health systems can develop their own customized AI predictive models to care for patients more effectively.
“Population health programs rely on AI predictive models to determine which patients most need scarce resources, but many generic AI models can overlook groups of patients, exacerbating health disparities in those communities,” said Reshma Gupta, chief of population health and accountable care for UC Davis Health. “We set out to create a customized AI predictive model that could be assessed, monitored, improved and implemented to open the way for more inclusive and effective health strategies.”
Creating the BE-FAIR model
To create the system-wide BE-FAIR model, UC Davis Health brought together a team of population health, information technology and health system equity experts.
Over a two-year period, the team created a new stepwise framework that provided care managers with the predicted probability of potential future hospitalizations or emergency department visits for individual patients.
Patients above a percentile risk threshold were identified and, with guidance from their primary care clinicians, it was determined whether they could benefit from enrollment in the program. When appropriate, staff proactively contacted patients, provided needs assessments and started the predefined care management workflows.
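As a rough illustration of this kind of percentile-based cutoff, the sketch below flags patients whose predicted risk falls above a chosen percentile of the scored population. The column names, the threshold value and the synthetic scores are assumptions made for illustration, not details from the BE-FAIR model.

```python
import numpy as np
import pandas as pd

def flag_high_risk(patients: pd.DataFrame, percentile: float = 95.0) -> pd.DataFrame:
    """Return patients whose predicted risk is at or above the given percentile."""
    # Convert the percentile (e.g. 95th) into an absolute risk cutoff.
    cutoff = np.percentile(patients["predicted_risk"], percentile)
    flagged = patients[patients["predicted_risk"] >= cutoff].copy()
    flagged["risk_cutoff"] = cutoff  # keep the cutoff alongside each flagged patient
    return flagged

# Example usage with synthetic risk scores (illustrative only).
patients = pd.DataFrame({
    "patient_id": range(1, 1001),
    "predicted_risk": np.random.default_rng(0).beta(2, 20, size=1000),
})
outreach_list = flag_high_risk(patients, percentile=95.0)
print(len(outreach_list), "patients flagged for care-management review")
```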
Responsible use of AI
After a 12-month period, the team evaluated the model's performance. They found the predictive model underestimated the probability of hospitalizations and emergency department visits for African American and Hispanic groups. By assessing the calibration of the predictive model, the team identified the ideal percentile threshold to reduce this underprediction.
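A minimal sketch of what a subgroup calibration check can look like is shown below, assuming a table with each patient's predicted risk, observed 12-month outcome and race/ethnicity group. The column names and structure are illustrative assumptions, not code from the published study.

```python
import pandas as pd

def calibration_by_group(df: pd.DataFrame) -> pd.DataFrame:
    """Compare mean predicted risk with the observed event rate within each group."""
    summary = df.groupby("race_ethnicity").agg(
        mean_predicted=("predicted_risk", "mean"),
        observed_rate=("had_event", "mean"),
        n=("had_event", "size"),
    )
    # A ratio well below 1.0 means the model assigns lower risk than the
    # outcomes that actually occurred, i.e. it underpredicts for that group.
    summary["predicted_to_observed"] = summary["mean_predicted"] / summary["observed_rate"]
    return summary
```

In a check like this, a predicted-to-observed ratio well below 1.0 for a particular group would signal the kind of underprediction the team reported.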
“As health care providers, we are responsible for ensuring that our practices are as effective as possible and help as many patients as we can,” said Gupta. “By analyzing our model and making small adjustments to improve our data collection, we were able to implement more effective health strategies.”
Studies have shown that health systems need to systematically evaluate AI models to determine their value for the patient populations they serve.
“AI models should not only help us use our resources effectively – they can also help us be fairer,” added Hendry Ton, associate vice chancellor for health equity, diversity and inclusion. “The BE-FAIR framework ensures that equity is built into every step, preventing predictive models from reinforcing health disparities.”
Framework
Health care organizations across the United States have adopted AI systems to optimize patient care.
About 65% of hospitals use predictive AI models created by electronic health record software developers or third-party vendors, according to data from the 2023 American Hospital Association Annual Survey Information Technology Supplement.
“It is well known that AI models work only as well as the data you put into them – if you take a model that was not built for your specific patient population, some people will be missed,” said Jason Adams, director of data and analytics strategy. “Unfortunately, not all health systems have the staff to create their own customized population health predictive model, so we created a framework that health leaders can follow to develop their own.”
The nine-step framework of the BE-FAIR model is described here.