Clinicians clacking away at workstations in hospitals know what the ones and zeros buzzing in the background are doing, right?
In fact, doctors and health systems are often in the dark about important details of the algorithms they rely on, such as those used to predict the onset of dangerous medical conditions. But in what advocates call a step forward, federal regulators are now requiring electronic health record (EHR) companies to disclose to customers a wide range of information about artificial intelligence tools in their software.
Since the beginning of January, clinicians should be able to consult a model card, or “nutrition label,” detailing the variables that go into a prediction, whether a tool has been tested in the real world, what the tool’s developers have done to remedy potential biases, warnings about misuse, and more.