Gao, Y., Lee, D., Burtch, G., & Fazelpour, S. (2025). Take caution in using LLMs as human surrogates
September 18, 2025

“This paper critically evaluates the potential dangers of employing large language models (LLMs) as surrogates for human participants or as simulations of human behavior in social science research. Through an in-depth empirical case study, we find that LLMs do not exhibit behavior consistent with humans in a simple scenario. Further, LLMs demonstrate inconsistent and idiosyncratic responses. We explore failure modes, analyze their limitations from empirical and philosophical perspectives, and propose practical guidelines for future research. Our study underscores the importance of transparency and rigor to ensure replicable and reliable research in this emerging area.” (PNAS). Read the publication, "Take caution in using LLMs as human surrogates," here.