Human Aversion to Algorithms and Ways to Overcome It

October 2019: We are compiling summaries of state-of-the-art research in ethics at the frontier of technology, following the theme of our 2019 Susilo Symposium. Today, we review insights on algorithm aversion from Berkeley Dietvorst (The University of Chicago, Booth School of Business), Joseph Simmons and Cade Massey (both from University of Pennsylvania, The Wharton School).

Resistance to Algorithmic Advice Compared to Human Advice

Organizations want to hire the people who are most likely to succeed. Hiring decisions rest on forecasts of a candidate’s future success, which in turn rely on the information in their application. At universities, for example, members of the selection committee traditionally review all applications and make a forecast about each one. This is the human method. Universities can also rely on evidence-based algorithms, using data on past applicants to build statistical models or decision rules that predict each candidate’s likelihood of success. A growing body of research shows that, on average, evidence-based algorithms make more accurate predictions than humans in domains ranging from clinical diagnosis to employee success. When choosing between algorithmic and human predictions, then, it would make sense for organizations to go with algorithms.

However, recent research by Dietvorst, Simmons, and Massey shows that seeing an algorithm perform can decrease people’s trust in it.

In online and laboratory experiments, Dietvorst and his colleagues found that when people saw algorithms make occasional mistakes, they lost confidence in them more quickly than when human forecasters made the same mistakes. In one experiment, for example, participants forecast the success of MBA applicants based on eight criteria (undergraduate degree, GMAT scores, essay quality, interview quality, etc.). Participants saw forecasts from a human, from an algorithm, from both, or from neither. They were then shown the grades the applicants actually received, revealing the forecasting mistakes of the algorithm and of the human. Participants who had seen the algorithm forecast were less confident in it and more likely to bet on humans producing better forecasts in the future. This held even for participants who saw the algorithm outperform the human.
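To make the comparison concrete, here is a minimal sketch (with hypothetical numbers, not data from the study) of the accuracy measure at stake: the mean absolute error of human versus algorithmic forecasts against the outcomes participants were shown.

```python
def mean_absolute_error(forecasts, actuals):
    """Average absolute deviation between forecasts and realized outcomes."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(forecasts)

# Hypothetical success scores for five applicants (0-100 scale).
actual    = [70, 55, 88, 40, 65]
human     = [60, 70, 75, 55, 60]   # a human forecaster's guesses
algorithm = [68, 60, 84, 45, 62]   # a statistical model's predictions

human_mae = mean_absolute_error(human, actual)
algo_mae  = mean_absolute_error(algorithm, actual)
print(human_mae)  # 11.6
print(algo_mae)   # 3.8
```

In this illustration the algorithm errs far less than the human on average, yet it still visibly errs on every applicant; it is those visible errors that the research finds so damaging to trust.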

Solutions to Overcome “Algorithm Aversion”

How can one increase employees’ or customers’ trust in, and use of, algorithms? In a follow-up article, Dietvorst et al. found that people were more likely to choose an algorithm if they could modify its forecasts. In their study, participants were told about an imperfect algorithm that forecast students’ grades and was off by 17.5 points (out of 100) on average. Participants then made a series of grade forecasts based on information about each student. In the control condition, they had to choose between exclusively using their own forecasts (any grade from 0 to 100) or exclusively using the model’s forecasts (if the algorithm forecast 82, participants had to forecast 82). In the “adjust” conditions, participants likewise chose between their own forecasts and the algorithm’s, but they could adjust the model’s forecasts by up to 10 points (if the algorithm forecast 82, participants could forecast any grade from 72 to 92), 5 points, or 2 points. People were more likely to use the algorithm when they could adjust its forecasts. Interestingly, participants were insensitive to how much adjustment was allowed (10 vs. 5 vs. 2 points).
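The “adjust” condition can be sketched as a simple clamping rule. This is an illustration of the mechanism, not the authors’ code: the participant’s submitted forecast is constrained to lie within the allowed band around the model’s forecast, and within the 0–100 grade scale.

```python
def adjusted_forecast(model_forecast, desired_forecast, max_adjust):
    """Clamp a participant's desired forecast to within max_adjust
    points of the model's forecast, then to the 0-100 grade scale."""
    low = max(0, model_forecast - max_adjust)
    high = min(100, model_forecast + max_adjust)
    return min(max(desired_forecast, low), high)

# The paper's example: the model forecasts 82; with a 10-point band
# the participant may submit any grade from 72 to 92.
print(adjusted_forecast(82, 65, 10))  # 72 (clamped up to the band's floor)
print(adjusted_forecast(82, 90, 10))  # 90 (within the band, kept as-is)
print(adjusted_forecast(82, 95, 2))   # 84 (2-point band: clamped to 84)
```

The finding that participants were insensitive to the band width (10 vs. 5 vs. 2 points) suggests it is the existence of the `max_adjust` lever, not its size, that restores trust.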

Overall, the research suggests that algorithm aversion can be reduced by giving people some control, even if only a small amount. Reducing algorithm aversion can improve performance across companies and industries. Furthermore, “it might help to enhance the social good in domains where increased performance can save lives, like allocating organ donations and operating vehicles.” (Dr. Dietvorst, email correspondence)

The two published academic papers can be found here:

Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114-126.

Dietvorst, B. J., Simmons, J. P., & Massey, C. (2018). Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Management Science, 64(3), 1155-1170.
