Understanding and utilizing algorithm-based healthcare services
August 2020: We are compiling summaries of state-of-the-art research in ethics at the frontier of technology, following the theme of our 2019 Susilo Symposium. Today, we review insights on understanding and utilizing medical artificial intelligence from Romain Cadario (Erasmus University), Chiara Longoni & Carey K. Morewedge (Boston University).
Medical artificial intelligence (AI), such as algorithm-based healthcare services, is growing rapidly, with applications ranging from skin cancer detection and emergency department triage to diagnosis of COVID-19 from chest X-rays. Medical AI is cost-effective, scalable, and often outperforms human providers. Unfortunately, patients exhibit resistance to medical AI.
In this research, Cadario, Longoni and Morewedge show that the perception that medical AI is a “black box” creates a barrier to its utilization. People erroneously believe they understand medical decisions made by human providers, and that only decisions made by algorithmic providers are a black box. In fact, the authors show that both are a black box, and this illusion of explanatory depth impairs utilization of algorithmic providers relative to human providers. Fortunately, the authors show that interventions can increase subjective understanding of algorithmic decision processes, which increases willingness to utilize algorithmic healthcare providers at no expense to the utilization of human providers.
In one experiment, participants were assigned to either a human provider or an algorithmic provider condition in the context of skin cancer detection. Participants read that “A dermatologist [algorithm] will examine the scans of your skin to identify cancerous skin lesions,” and then completed a measure of subjective understanding (“To what extent do you understand how a dermatologist [algorithm] examines the scans of your skin to identify cancerous skin lesions?”). To test the degree to which this rating was illusory, participants then generated an open-ended explanation of the provider’s decision process. After completing the explanation, participants provided a second rating of their subjective understanding of the dermatologist’s [algorithm’s] decision process. Comparing pre- and post-explanation ratings, the authors found a significant decrease in subjective understanding for human providers, but not for algorithmic providers. In other words, people were more aware of their limited understanding of medical decisions made by algorithmic providers than of those made by human providers.
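To make the pre- versus post-explanation comparison concrete, here is a minimal sketch with simulated ratings. The sample size, rating scale, and effect sizes are assumptions for illustration only; this is not the authors' data or analysis code, just one way such a within-subject comparison could be run.

```python
# Illustrative sketch only: hypothetical ratings, not the authors' data or analysis.
# Compares subjective understanding before vs. after writing an explanation,
# separately for the human-provider and algorithmic-provider conditions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100  # hypothetical sample size per condition

# Hypothetical 1-7 ratings: understanding of the human provider drops after
# explaining (illusion revealed); understanding of the algorithm stays flat.
human_pre  = np.clip(rng.normal(5.0, 1.0, n), 1, 7)
human_post = np.clip(human_pre - rng.normal(0.8, 0.5, n), 1, 7)
algo_pre   = np.clip(rng.normal(3.5, 1.0, n), 1, 7)
algo_post  = np.clip(algo_pre - rng.normal(0.0, 0.5, n), 1, 7)

for label, pre, post in [("human provider", human_pre, human_post),
                         ("algorithmic provider", algo_pre, algo_post)]:
    t, p = stats.ttest_rel(pre, post)  # paired, within-subject comparison
    print(f"{label}: pre={pre.mean():.2f}, post={post.mean():.2f}, t={t:.2f}, p={p:.3f}")
```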
Next, Cadario and colleagues tested an intervention to increase subjective understanding and utilization of algorithmic healthcare providers. Participants were assigned to one of four experimental conditions (control & human provider, control & algorithmic provider, intervention & human provider, intervention & algorithmic provider). Using the same skin cancer detection context, participants in the intervention condition first rated their subjective understanding and then read supplementary information describing how doctors [algorithms] diagnose skin cancer from photographs of moles. The information was presented in a single diagram using the ABCD (Asymmetry, Border, Color, Diameter) framework. Compared to a control condition without supplementary information, the intervention led to an increase in subjective understanding of algorithmic providers, which in turn increased intentions to utilize algorithmic providers.
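The logic of that last sentence is a mediation claim: the intervention raises subjective understanding, and understanding in turn raises utilization intentions. The sketch below illustrates that logic with simulated data for the algorithmic-provider conditions; the variables, coefficients, and sample size are assumptions for illustration, not the paper's dataset or statistical results.

```python
# Illustrative sketch only: simulated data, not the authors' dataset or analysis.
# Two regressions trace the mediation path: intervention -> understanding -> utilization.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
intervention  = rng.integers(0, 2, n)                               # 0 = control, 1 = ABCD information
understanding = 3.5 + 1.0 * intervention + rng.normal(0, 1, n)      # hypothetical effect of the intervention
utilization   = 2.0 + 0.6 * understanding + rng.normal(0, 1, n)     # hypothetical effect of understanding

df = pd.DataFrame({"intervention": intervention,
                   "understanding": understanding,
                   "utilization": utilization})

# Path a: the intervention increases subjective understanding.
print(smf.ols("understanding ~ intervention", data=df).fit().params)
# Path b: understanding predicts utilization, controlling for the intervention.
print(smf.ols("utilization ~ understanding + intervention", data=df).fit().params)
```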
These results suggest that proposed regulations requiring explanations of decisions made by algorithms could encourage patient utilization of algorithmic healthcare providers, without penalizing utilization of human-provided healthcare.
The working paper can be found here:
Cadario, Romain, Chiara Longoni, and Carey K. Morewedge, Understanding and Utilizing Medical Artificial Intelligence (August 17, 2020).