Algorithm Appreciation
February 2020: We are compiling summaries of state-of-the-art research in ethics at the frontier of technology, following the theme of our 2019 Susilo Symposium. Today, we review insights on algorithm appreciation from Jennifer Logg (Georgetown University), Julia Minson (Harvard University) and Don Moore (University of California, Berkeley).
Across six studies, the researchers asked participants to forecast different events, such as how high a song would rank on the Billboard Hot 100 or the probability that specific geopolitical events would occur.
In all studies, participants were first asked to make a numerical prediction about a particular event. For example, participants answered “What is the probability of North America imposing sanctions on a country in response to cyber attacks?” by typing a percentage from 0% to 100%.
Next, half of the participants received advice from “an algorithm”, while the other half received the same advice from a “person”. After receiving the advice, itself a prediction from 0% to 100%, participants could revise their initial prediction, and accuracy was incentivized: the closer their final prediction was to the actual answer, the greater their financial bonus.
Finally, the researchers measured how much participants changed their initial prediction after receiving the advice. They found that people revised their prediction to a greater extent when it came from an algorithm. Logg and colleagues ruled out the possibility that the effect was driven by the age of participants: age had no influence on willingness to rely on the algorithm. What really mattered was how comfortable participants were with numbers, measured with an 11-question math quiz. Participants who scored higher on numeracy were more likely to rely on algorithms.
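A common way to quantify this kind of revision in the advice-taking literature is a weight-of-advice measure: the fraction of the distance toward the advisor's estimate that the revised prediction moved. The snippet below is a minimal sketch of that calculation, assuming that measure; the function name, formula, and example numbers are our own illustration, not the authors' published analysis code or data.

```python
from typing import Optional

def weight_of_advice(initial: float, advice: float, final: float) -> Optional[float]:
    """Fraction of the distance toward the advice that the revised estimate moved.

    0.0 means the initial prediction was kept unchanged;
    1.0 means the advice was adopted wholesale.
    Returns None when the advice equals the initial estimate (shift undefined).
    """
    if advice == initial:
        return None
    return (final - initial) / (advice - initial)

# Illustrative example (not data from the study): a participant first predicts 40%,
# receives advice of 70%, and revises to 55% -- moving halfway toward the advice.
print(weight_of_advice(40, 70, 55))  # 0.5
```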
In previous blog posts, we presented findings on algorithm aversion, which point in the opposite direction. What can explain the difference between the results on algorithm aversion and algorithm appreciation? In the article featured in our January post, Castel and colleagues suggest that Logg and colleagues focused on preferences for algorithms versus humans when making decisions for other people, whereas much of the research on algorithm aversion examined preferences for algorithms versus humans when making decisions for oneself. Prior research has shown that making decisions for other people is associated with greater psychological distance. This “other vs. self” difference could therefore explain the divergent findings and constitutes an interesting avenue for future research.
The published academic paper can be found here:
Logg, Jennifer M., Julia A. Minson, and Don A. Moore. “Algorithm appreciation: People prefer algorithmic to human judgment.” Organizational Behavior and Human Decision Processes, 151 (2019): 90-103.
Summary in Harvard Business Review