AI bias evaluation efforts are uneven across U.S. hospitals

While about two-thirds of American hospitals use AI-assisted predictive models, only 44% evaluate those models for bias, raising concerns about equitable care for patients.
Those were the findings of a recent study conducted by the University of Minnesota School of Public Health and published in Health Affairs, which analyzed data from 2,425 hospitals across the country.
The study highlighted variations in AI adoption, noting that hospitals with greater financial resources and technical expertise are better equipped to develop and evaluate AI tools than facilities lacking those resources.
The report also found that hospitals primarily use AI tools to predict health trajectories for inpatients, identify high-risk outpatients and assist with scheduling.
Paige Nong, an assistant professor at the School of Public Health, explained that one of the main questions driving her research is how hospitals that cannot evaluate these tools themselves can ensure the AI tools they adopt are designed with their patients' specific needs in mind.
“We do not want these hospitals to be stuck with two bad options – using AI without the necessary evaluation and oversight, or not using it at all, even though it could help with some major organizational challenges,” she said.
Nong said one step organizations can take is to use the information provided in the predictive model labels described by the Assistant Secretary for Technology Policy in the HTI-1 rule.
These labels provide important information, so even if hospitals cannot build models tailored to their own patient populations, they can be informed consumers of the tools available to them.
“Even if this information is not easily accessible, they can and should ask their vendors for it,” Nong said.
She acknowledged there is considerable room for improvement when it comes to bias.
“First, the local evaluation process we discuss in the paper is a valuable step toward ensuring that AI tools work well for all patients,” she said. “Second, looking at the predictors that drive the output is useful.”
If organizations can see that the predictors may be biased – factors like income or religious identity – they can avoid those tools.
She added that thinking carefully about what it means to apply a tool to patients is important.
“If the model predicts missed appointments, for example, how can humans make decisions about that tool in ways that are fair and ethical, instead of perpetuating bias?” she said.
Nong said she is excited about future research exploring how healthcare IT professionals can help bridge the digital divide between well-funded hospitals and under-resourced ones when it comes to the capacity to adopt and evaluate AI.
“On the policy side, we describe several examples of valuable collaborations and partnerships in the paper, such as regional extension centers, AHRQ patient safety organizations and others,” she said.
She noted the Health AI Partnership is one group trying to do this kind of networked technical assistance.
“On the practice side, IT professionals can engage with their communities, professional associations or networks to identify the needs of under-resourced care organizations and provide important insights and support,” Nong said.
Nathan Eddy is a healthcare and technology freelancer based in Berlin.
Email the author: nathaneddy@gmail.com
Twitter: @dropdeaded209