Solving for a Crisis in Empirical Legal Studies
Professor Kathy Zeiler advocates for transparency in empirical studies, which courts rely on to weigh the credibility of evidence and legislators use to justify policy changes.

Kathy Zeiler, BU Law professor and Nancy Barton Scholar, is on a mission to increase transparency in empirical research. A former auditor and tax consultant with Ernst & Young, Professor Zeiler is a noted scholar of economic theory and empirical methods, and of how they are imported into legal research and used in judicial and legislative decision making.
For years, Zeiler—who has produced large-scale empirical studies of medical malpractice litigation and insurance premiums and coedited the Research Handbook on Behavioral Law and Economics—has been encouraging researchers publishing academic studies to take pains to ensure that their work can be appraised by readers and replicated by other academics in their field. She’s used her platform as president of the American Law & Economics Association (ALEA) and her involvement with the Society for Empirical Legal Studies to advocate for this critical work.
“Empirical legal studies are used not only by other researchers. They are used extensively, in some cases, by policymakers, by the courts, by legislators, regulators, and the executive branch to back up suggested changes in laws, regulations, and legal doctrine,” she says. “It’s especially important that we move toward accessibility and appraisability, so the studies these policymakers are consulting are as reliable as possible.”
Over the past three years, as Zeiler has served on the ALEA executive board, and especially as president in 2022–23 (her term ended in June, culminating with the ALEA Annual Conference, held at BU Law), she has emphasized the need for transparency in empirical work as well as the importance of mentorship within the community, especially for women and people of color.
“ALEA’s annual conference was one of the first places I presented my work, and the association provided me with a community of people that I could go to with questions and collaborate with,” Zeiler says. “I got a lot from the organization as a junior [faculty member], so for me, serving on the board was a way to give back—to try to connect the community even more and to try to push the board’s vantage point a little wider.”


[Photos: Professor Zeiler (top left) and guests socialize and discuss their work at the 2023 ALEA Conference; Professor Louis Kaplow delivers the Ronald H. Coase Lecture.]

Identifying the Problem
Zeiler had been working on issues surrounding the use of social science data in legal scholarship for close to a decade when she began collaborating with Jason Chin, of the Australian National University College of Law, on projects related not only to the importation of published empirical findings into legal scholarship but also to the science itself.
The problem goes back to the early 2010s, when psychology researchers began looking into how a number of articles with seemingly outlandish claims (including one that purported to show evidence of telepathic communication) made it to publication. What they found was widespread publication bias, the use of questionable research practices, and reliance on flawed data sets. While some of the dubious studies may have involved outright fraud, Zeiler thinks there’s a simpler explanation: “There is definitely some fraud happening,” she says, “but most non-replicable work likely is due to our curiosity and how our brains work. We get led astray by the data instead of staying faithful to our research plan.”
For example, in one survey, many psychologists who collect data through lab experiments admitted to incrementally increasing their sample sizes until they achieved a result that confirmed their hypotheses. “The statistical tests break down when you do that, so you end up getting spurious results. You get a result that says this effect exists, but it really doesn’t, and others can’t replicate it,” Zeiler says. Other researchers may notice an interesting feature in their data and decide to change course to study that instead. If that feature turns out to be an anomaly—a common occurrence in large data sets—then the results of the study won’t replicate the next time a researcher draws a sample.
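Why do the tests break down? A short simulation makes the point concrete. The Python sketch below is purely illustrative (it is not code from Zeiler’s research, and the function name and parameters are hypothetical): it runs many simulated experiments in which the true effect is zero. With a fixed sample size, a t-test falsely detects an effect about 5 percent of the time, as designed; when the experimenter instead keeps adding observations and retesting until the result comes out significant, the false-positive rate climbs far above that nominal level.

```python
# Illustrative simulation of "optional stopping": adding participants and
# retesting until a significant result appears inflates the false-positive
# rate of a t-test well beyond its nominal 5% level. (Hypothetical sketch,
# not code from Zeiler's studies.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

def false_positive_rate(optional_stopping, n_start=20, n_max=100, step=10,
                        n_experiments=2000, alpha=0.05):
    """Simulate experiments where the true effect is zero and report how
    often a one-sample t-test (falsely) rejects the null hypothesis."""
    rejections = 0
    for _ in range(n_experiments):
        # The true mean is 0, so any "effect" the test finds is spurious.
        data = list(rng.normal(0.0, 1.0, n_start))
        while True:
            if stats.ttest_1samp(data, 0.0).pvalue < alpha:
                rejections += 1
                break
            if not optional_stopping or len(data) >= n_max:
                break
            # Peek at the result, then collect a few more observations.
            data.extend(rng.normal(0.0, 1.0, step))
    return rejections / n_experiments

print(f"fixed sample size: {false_positive_rate(False):.1%}")  # ~5%, as advertised
print(f"optional stopping: {false_positive_rate(True):.1%}")   # well above 5%
```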
Human nature comes into play again when these studies with spurious results get published. “Journal editors like to publish studies that show effects,” Zeiler says. “So, as a researcher, if you’re not producing an effect, you’re less likely to get published.” That publication bias leads researchers to engage in questionable research practices with the aim of producing publishable results, and that can lead to problems with replicability and questions around the reliability of the study.
Speaking about her work with Chin, Zeiler says, “Our collaboration expanded my understanding of problems that I did not know existed. And once it was clear to me that other fields were having these issues, it seemed certain that the field of empirical legal studies had them as well.” Their work has resulted in “Replicability in Empirical Legal Research” and several other articles. Addressing these problems has become a core part of Zeiler’s research and was a main goal of her term as president of ALEA.
This lack of replicability in empirical legal studies has consequences far beyond academia. Subject matter experts testifying in congressional and state legislative hearings, authors of court briefs and comments to administrative agencies, legislators, and policymakers all rely on this research to inform their positions. Trial judges must consider the credibility of expert witnesses and the admissibility of scientific studies presented as evidence. Flawed research practices have contributed to decades of empirical studies with unreliable results, which calls into question how courts, legislatures, and administrative bodies assess and make use of that research.
Zeiler and her coauthors suggest that legal scholars who use empirical work to support normative claims be cautious and make sure that readers understand the open questions surrounding these studies. If a study is not appraisable, because the analysis code or the data aren’t available, or because the researchers haven’t disclosed who funded the work or their conflicts of interest, readers should give the reported results less weight.
Finding Practical Solutions
In the longer term, Zeiler has been advocating for journals to change their policies around how authors submit an article for publication. “It’s not enough to say that the data are available upon request,” she says, “because we know from meta-research [a new field of investigation conducted on already published research to determine its reliability] that if you ask an author for it, you only get it about half of the time.” To avoid that, she recommends that journals require authors to publish their data in a repository along with the analysis code and any other materials needed to appraise the study.
To mitigate publication bias, Zeiler recommends a new publication process called Registered Reports. Instead of submitting a completed article for peer review, an author submits their hypotheses and their plan for collecting and analyzing the data. Editors and peer reviewers then evaluate the theory and the plan for testing it, without knowing the results, and determine whether the research question is interesting and whether the study will move the field forward. If they think the answer is yes, the journal commits to publishing the results, whatever they turn out to be, as long as the author’s actual testing procedures followed the plan. Any results outside the plan are also published but clearly labeled as exploratory findings that require further testing.
Approximately 350 journals now invite researchers to submit these kinds of manuscripts. The American Law & Economics Review, which recently invited Zeiler to join its editorial board, may be among that number soon enough. “The first task I took on [after joining the editorial board] was to update the journal’s policies on requirements around transparency,” she says. “It will soon require the posting of data and code in a repository. And I hope I can take these other ideas that have been the subject of my recent research and put them into action at the journal.”
Student-run law reviews pose an additional challenge because their editorial boards rotate each year. Zeiler has been working with collaborators to advise law students from year to year, so they understand the ongoing questions around empirical work and the legal scholarship that makes use of it. All it takes, she says, is one faculty member at a law school who understands the issues and offers an annual workshop or information session to student editors.
That’s why the community she’s building in her work for the Society for Empirical Legal Studies and the American Law & Economics Association is so critical. It may take time to change the hearts, minds, and methods of empirical scholars and the editors who publish their work, but Zeiler and her colleagues are up to the task.
“I get a lot of junior scholars coming up to me and asking how they can be a part of this movement,” she says, “and it’s really heartening. Researchers want to be part of a community that’s doing reliable work.”