Despite a century of courtroom use, fingerprint examiners have little hard evidence of the method's accuracy.
Six and a half years is a long time to spend in prison. For Stephen Cowens of Roxbury, Massachusetts, it was six and a half years too long. In January 2004, Boston police and prosecutors admitted they were wrong when, at Cowens's 1998 trial for murder, they claimed that fingerprints found at the crime scene unquestionably belonged to him. Cowens, 33 at the time of his release, had spent a fifth of his life in prison.
Despite this wrongful conviction, and several others, fingerprint examiners maintain that print identifications are infallible. But by the end of 2005 the Massachusetts Supreme Judicial Court (SJC) may reach a very different conclusion. Attorneys for Terry Patterson, accused in the 1993 murder of a Boston police detective, have asked the SJC to throw out the fingerprint identifications that are the only evidence against Patterson and to bar all print identifications until the method is subjected to rigorous scientific scrutiny.
Patterson's case has yet to be decided, but at least five people have been wrongfully identified through fingerprints in the last ten years; four of them ended up behind bars. Most famously, Brandon Mayfield, an Oregon lawyer, spent two weeks in jail in 2004 because three FBI experts matched his prints with those found on a plastic bag that was evidence in the investigation of the Madrid train bombings. Spanish authorities continued to try to match the prints after the FBI arrested Mayfield and eventually linked them to an Algerian man. Since defendants rarely challenge the accuracy of fingerprint evidence, there could be many more undiscovered mistakes. Regardless of the exact number, it is clear that innocent people have been jailed because of fingerprint identifications that were wrong.
Since 1911, when prosecutors first introduced them as evidence, U.S. courts have routinely accepted fingerprints, and juries have considered them incontrovertible. But unlike DNA evidence, fingerprinting was adopted before the Supreme Court ruled that attorneys and expert witnesses must show that evidence is scientific and reliable. “There has been a hundred years of precedent, not a hundred years of data,” says Ralph Haber, a consultant who has studied the reliability of forensic evidence.
Fingerprint identification rests on a simple anatomical feature of primates. The skin of their soles, palms and, most importantly, fingers contains raised ridges and glands that secrete oil to keep the skin supple. Ridge patterns form in utero and do not change throughout a person's life. When someone touches an object, he transfers some of the oil from his fingers, leaving behind the ridge pattern. Experts classify ridges by their overall pattern: whether they form concentric circles, loop horizontally or vertically across the fingertip, and so on.
Everyone on the planet may have a unique set of prints, as examiners claim, but the real question is whether experts can accurately link the prints collected by crime scene investigators to the right person. Crime scene prints usually capture only about 20% of the fingertip, and they are often smudged. Examiners link a partial print from a crime scene to a whole one taken from a suspect by matching particular characteristics of the two prints. They compare the overall print pattern and other ridge characteristics, including the width of the ridges and the spacing of oil pores, according to Ed German, a fingerprint examiner with the U.S. Army. But examiners primarily rely on matching points on both prints where ridges end, bifurcate, or change direction. Examiners conclude that a crime scene print came from a suspect after matching between three and 16 points (FBI examiners found 15 points on Mayfield's print). But there are no standards on the number of points that must be matched; instead each lab, and sometimes each examiner, determines the number needed. In Patterson's case a police examiner found six matching points on one crime scene print, five on a second, and two on a third.
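The point-matching step can be illustrated with a toy sketch. Everything here is an invented simplification, not any lab's actual procedure: each print is reduced to a list of ridge points (a location plus a type, such as a ridge ending or bifurcation), and a point counts as matched when a crime-scene point lies close enough to a suspect point of the same type.

```python
from math import hypot

# Each point: (x, y, kind), where kind is "ending" or "bifurcation".
# Coordinates and the distance tolerance are illustrative assumptions;
# real examiners judge similarity visually, not with a fixed formula.

def count_matching_points(scene, suspect, tol=4.0):
    """Count scene points lying within `tol` units of a same-kind
    suspect point; each suspect point can be used only once."""
    unused = list(suspect)
    matches = 0
    for (x, y, kind) in scene:
        for cand in unused:
            cx, cy, ckind = cand
            if ckind == kind and hypot(x - cx, y - cy) <= tol:
                matches += 1
                unused.remove(cand)
                break
    return matches

scene = [(10, 12, "ending"), (30, 40, "bifurcation"), (55, 18, "ending")]
suspect = [(11, 13, "ending"), (31, 39, "bifurcation"), (80, 80, "ending")]
print(count_matching_points(scene, suspect))  # 2 of the 3 points match
```

Because there is no shared standard for how many matched points constitute an identification, the same pair of prints could be declared a match by one lab and inconclusive by another.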
One known flaw in fingerprinting is that examiners may taint the identification process through bias and peer pressure. A panel of outside print examiners convened by the FBI to review the Mayfield case found that a supervisor made the initial identification and lower-ranking examiners, when asked to confirm or reject their boss’ work, felt pressured to confirm. Having FBI supervisors make the initial identifications was not unusual, according to Alan McRoberts, a retired Los Angeles County Sheriff’s Department examiner and chair of the review panel, and other agencies do it as well. But while this practice resulted in one wrongful arrest, the panel did not recommend the FBI review other cases. “As a committee, I don’t think we discussed that in particular,” said McRoberts.
A more fundamental problem is the lack of underlying statistical evidence. The use of genetic evidence provides a good comparison. Scientists and lawyers subjected the technique, developed in 1984 and first introduced into a U.S. court in 1987, to years of scientific scrutiny and almost a decade of court challenges before it became accepted evidence. In DNA analysis, examiners identify and compare short segments of DNA—generally 13—to make a match. In addition to having established procedures for analyzing evidence, experts have calculated the odds that two people could share the same DNA in all 13 segments. These odds vary slightly based on the prevalence of certain DNA patterns among different ethnic groups but are in the tens of millions to one against two people sharing all 13 segments.
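How such odds are derived can be sketched with invented numbers. Suppose each of the 13 segments (loci) matches a random person with probability one in four; real per-locus frequencies vary by locus and ethnic group, and the uniform one-in-four figure here is purely illustrative. Treating the loci as independent, the per-locus probabilities multiply:

```python
from fractions import Fraction

# Illustrative per-locus random-match probability (an invented,
# uniform 1-in-4; real frequencies differ by locus and population).
per_locus = Fraction(1, 4)

# Independent events: the chance of matching all 13 loci is the product.
combined = per_locus ** 13

odds_against = combined.denominator // combined.numerator
print(f"about 1 in {odds_against:,}")  # about 1 in 67,108,864
```

With these toy inputs the combined odds land in the tens of millions to one, the order of magnitude cited for real DNA matches. Fingerprinting has no analogous calculation because no one has measured how common individual ridge features are.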
Fingerprint examiners frequently tout the permanence and uniqueness of fingerprints, but they do not know the odds that two people could share a given number of fingerprint characteristics. With no clear rules for how much weight to give the various print characteristics, such as point matches, ridge width and the spacing of oil pores, German argues that it is impossible to attach probabilities to print identifications. Many experts believe probabilities are unnecessary, since examiners would not make or confirm an identification unless they were certain of it. But when three of the most experienced FBI examiners confirm a mistake, as they did with Mayfield's prints, that argument collapses. Other print proponents argue that despite occasional human errors, the method itself is infallible. Critics such as Simon Cole, a legal historian who has testified in many of the court challenges, rightly point out that this is a useless distinction: for whatever reason, fingerprint identifications are sometimes wrong.
The handful of studies of fingerprint identification shows a troubling pattern of errors. Since 1995, Collaborative Testing Services, a company that evaluates the reliability and performance of fingerprint labs, has administered an annual, voluntary test. It sends fingerprint labs eight to twelve pairs of prints, which examiners confirm or reject as matches. The pairs usually consist of complete, not partial, prints, making identifications easier than the real situations examiners face. Nevertheless, the error rate has ranged from 3% to a dismal 20%.
Equally troubling is a test conducted by the FBI. During Byron Mitchell’s trial for armed robbery in 1999, his lawyers questioned the reliability of fingerprint identifications. In response, the FBI sent two prints taken from the getaway car and Mitchell’s prints to 53 crime labs to confirm the agency’s identification. Of the 39 labs that sent back their results, 9 (23%) concluded that Mitchell’s prints did not match those from the car. The judge nevertheless rejected the defense’s challenge and accepted the fingerprint evidence. Mitchell was convicted and remains in prison. The FBI has not repeated the experiment.
In the last few years the federal government has considered plans to study the reliability of fingerprints and other forensic evidence and then backed away, perhaps fearing a flood of challenges to previous convictions should the evidence not hold up. In 1998 the Justice Department's research branch, the National Institute of Justice, requested studies that would validate fingerprint identification as part of its yearly call for research proposals. It received four proposals but rejected them all. The following year it changed its guidelines to exclude studies of fingerprinting. In 2003 the Justice Department and the Department of Defense agreed to fund a National Academy of Sciences proposal to study a range of forensic techniques, including fingerprints, but later insisted on complete control of the results. The NAS abandoned the proposal rather than agree to this condition. “The Academy reports its research to the public. We felt we should not have these restrictions,” said Anne-Marie Mazza, director of the Academies' Science, Technology, and Law program.
The National Institute of Justice has again changed its call for proposals and now requests studies of fingerprinting and other forensic techniques. Given the mounting evidence of fingerprinting's fallibility, and the possibility of a ban on fingerprint evidence in Massachusetts, the agency may have more incentive to follow through with funding instead of abandoning the subject once again. The error rate for fingerprint identification may be low, but as Cowens and Mayfield can attest, it is certainly not zero. Neither law enforcement, nor the general public, nor those falsely accused and imprisoned are served by our unwavering faith in this flawed technique.