Artificial intelligence and persuasion
March 2020: We are compiling summaries of state-of-the-art research in ethics at the frontier of technology, following the theme of our 2019 Susilo Symposium. Today, we review insights on artificial intelligence and persuasion from Tae Woo Kim (University of Technology Sydney) and Adam Duhachek (University of Illinois at Chicago).
More and more individuals are exposed to information provided by artificial intelligence and robots; however, little is known about how persuasion attempts made by nonhuman agents differ from those made by human agents.
The team of researchers hypothesized that the effectiveness of persuasive attempts from artificial agents depends on construal-based differences in the message content. In short, construal level theory aims at understanding to what extent people’s thinking about various events is concrete (low-level construal) or abstract (high-level construal).
Across multiple studies, Kim and Duhachek showed that messages from artificial agents were judged as more appropriate and effective when they featured low-construal (concrete) rather than high-construal (abstract) content.
In one study, participants were randomly assigned to one of four websites containing persuasive messages about using sunscreen to reduce skin cancer risk. The website featured either a human doctor or IBM’s artificial intelligence system Watson. Additionally, the website featured either concrete messages (e.g., “How to use sunscreen? Apply sunscreen 30 minutes before going out”) or abstract messages (e.g., “Why use sunscreen? Using sunscreen means healthy skin”). After browsing the website, participants indicated their intention to use sunscreen, as a measure of persuasiveness.
The authors found that when the website featured an artificial doctor, the concrete arguments were significantly more persuasive than the abstract arguments. However, when the website featured a human doctor, the type of argument (abstract vs. concrete) did not influence persuasiveness.
While artificial intelligence makes better predictions than humans on specific tasks (e.g., skin cancer detection), people remain less likely to trust artificial intelligence than humans. The present research therefore has implications for designing message content that increases the persuasiveness of, and trust in, artificial agents.
The published academic paper can be found here:
Kim, Tae Woo, and Adam Duhachek (2020), “Artificial Intelligence and Persuasion: A Construal-Level Account,” Psychological Science.