On TV dramas like CSI: Crime Scene Investigation, the tiniest shreds of DNA are like magic keys, unlocking the identities of criminals with the speed of a supercomputer and the authority of science. In reality, DNA forensics isn’t nearly so exact, especially when the genetic material at a crime scene comes from more than one person. Analyzing these DNA mixtures isn’t about achieving certainty. It’s about partial matches, probabilities, big-time math, and a healthy dose of judgment calls by forensic scientists.

“There are no national guidelines or standards saying that labs have to meet some critical threshold of a match statistic” to conclude that a suspect might have been at a crime scene, says Catherine Grgicak (pronounced Ger-gi-chuk), Boston University assistant professor of anatomy & neurobiology. Neither are there guidelines about when a DNA mixture is simply too complicated to analyze in the first place. Often, labs aren’t even certain how many people contributed to the jumble of DNA detected on a weapon or the victim’s clothing. Plus, the evidence may contain very little genetic material from some or all of the contributors, and may include DNA degraded by heat and light.

Given the weight of DNA evidence in court, this uncertainty concerns many trial attorneys, forensic scientists, and federal authorities, who hope that additional training focused on handling DNA mixtures, along with number-crunching software, will bring more reliability to the interpretation of complex DNA evidence.

“It’s a problem,” says Sheree Hughes-Stamm, a forensic science professor at Sam Houston State University’s College of Criminal Justice. “It’s a problem of reliability with the interpretation of the results, rather than the science,” that yields those results, she adds. “Human interpretation is going to differ, and you risk misinterpreting the profile.”

Grgicak is among the forensic researchers trying to reduce this risk. Backed by $2.5 million in government funding from the US Department of Justice and Department of Defense, she and her team want to help crime labs unwind this genetic evidence to help identify the guilty without entangling the innocent.

The first piece of software, called NOCIt (NOC = number of contributors), uses statistical analysis to estimate the number of people whose DNA is part of the evidence, assigning a probability to each possible count from one to five contributors. The second, called MATCHit, compares the DNA mixture to the DNA from a suspect and computes a match statistic, known as a “likelihood ratio,” indicating how strongly the evidence supports the conclusion that this person contributed to the genetic mixture from the crime scene. The team’s goal is to combine NOCIt and MATCHit into a single tool for forensic labs by 2017.

[Crime scene illustration annotations: extract genetic material from sweat on shirt collar; was the shirt borrowed?; DNA from at least three people; can we isolate this?; check for skin cells; DNA degraded by heat?; testable DNA tricky to get from hairs on couch; how many people picked up that magazine?; is this evidence related to the crime?; who was in the apartment last week?]

To understand how DNA evidence can go wrong, it’s helpful to start with what DNA fingerprinting actually entails. Forensic labs don’t compare entire genomes. They examine tiny chunks of them, looking for commonalities at about 16 specific locations (the exact number varies depending on the kit used by the lab). At each location, there might be a few dozen possible genetic variations in the general population, and every person has two of them—one inherited from mom and one from dad. So, imagine that in DNA from a crime scene, each genetic location is a box containing Scrabble letter tiles representing variations. If each of these boxes contains just two letters, then the forensic scientist can assume the DNA is from just one person and can compare that DNA fingerprint to the DNA from a suspect, knowing that it’s almost impossible for two random people (identical twins aside) to have perfect matches at every location.
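The near-impossibility of a full coincidental match comes from multiplying genotype frequencies across independent locations. A minimal Python sketch of that arithmetic, using invented allele frequencies rather than real population data:

```python
def genotype_frequency(p, q):
    """Hardy-Weinberg frequency of a genotype with allele frequencies p and q."""
    return p * p if p == q else 2 * p * q

# One (p, q) pair of allele frequencies per location; 16 locations as in the text.
# These frequencies are invented for illustration, not real population data.
loci = [(0.1, 0.2)] * 16

match_prob = 1.0
for p, q in loci:
    match_prob *= genotype_frequency(p, q)  # multiply across independent locations

print(f"random match probability: {match_prob:.3e}")  # vanishingly small
```

Even with fairly common variations at every location, the product across 16 locations shrinks to a probability far smaller than one in a trillion, which is why a complete single-source match is so persuasive.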

But what if some of the boxes from our crime scene DNA contained not two letter tiles but six, while others contained five, and a few contained seven? In this case, when the DNA is clearly from more than one person, forensic labs can no longer determine a match between the evidence and a suspect’s DNA, but can only compute a likelihood ratio.

Typically, there are three basic conclusions a lab can make from DNA mixtures, depending on how many genetic variations a suspect’s DNA and the crime scene evidence have in common:

  • The suspect’s DNA doesn’t show up in the crime scene evidence.
  • The suspect might have been at the crime scene based on commonalities between his DNA and the mixture.
  • The evidence is too complicated to analyze.

The odds of two people having a few genetic variations in common with DNA fingerprinting are pretty good. Imagine mixing Scrabble tiles for every letter of one person’s first and last name in a hat. It’s not hard to pull them out and match them to a single name. But add the tiles for two or three other people, and the number of names you can potentially spell skyrockets. So, with a DNA mixture, it’s entirely possible to create false links between crime scene evidence and an innocent person who was nowhere near the crime.

“Mixture analysis is a murky part of DNA forensics,” says Greg Hampikian, a forensic biologist at Boise State University in Idaho. A few years ago, Hampikian was contacted by the lawyer of Kerry Robinson, who is serving 20 years in prison after being convicted in 2002 of raping a woman in Moultrie, a small town in southern Georgia. The woman, who was raped by three men, identified only one of her attackers, a man named Tyrone White. DNA analysis found that White’s genetic variations appeared in 11 of the 13 locations tested, which court records called “essentially a conclusive match.” As part of a plea bargain to reduce his sentence, White named Robinson as having also raped the woman. Robinson’s genetic variations matched the evidence mix in just two locations. As one forensic expert testified, up to 1,000 people among the 15,000 in Moultrie County could likely match the crime scene evidence to the same degree. Still, combined with White’s testimony, the DNA evidence helped the prosecution convince a jury to convict Robinson.

Hampikian and another forensic researcher put the DNA evidence that imprisoned Robinson to the test. They sent the data extracted from the trial evidence (a cheek swab from Robinson and DNA from the crime scene) to 17 accredited crime labs. Only one agreed with the lab used by prosecutors, which found that Robinson’s DNA shared some common genetic markers with the crime scene evidence, meaning that he “could not be excluded” as a suspect. Four labs said they couldn’t conclude anything from the evidence. Twelve labs reported that Robinson should be excluded as a suspect. Importantly, these labs didn’t find differing numbers of shared genetic variations in the evidence. They just interpreted the strength of that evidence differently, and it’s the interpretation that matters in court.

In an email, Kerry Robinson’s lawyer, Rodney Zell, says that he filed a habeas petition, a claim of wrongful imprisonment, in the court where Robinson was convicted. The petition was rejected and he is preparing an appeal to the Georgia Supreme Court.

“Errors in DNA forensics can be multiplied in the justice system,” says Hampikian. Often, DNA is used to corroborate otherwise flimsy evidence. Robinson, for instance, claimed that the convicted man named him because he suspected that Robinson had turned him in to the police. Just two of Robinson’s genetic markers were also found in the evidence, but because he had no corroborated alibi, that was enough for the lab to say he might have been at the crime scene.

Because of DNA’s vaunted reputation, Hampikian says, “suddenly, all this weak evidence gets propped up by science.”

Unlike the fuzzy memories and questionable motives of witnesses, DNA evidence seems objective and unassailable to many judges and juries. Only DNA was spared in a 2009 report by the National Academy of Sciences that took all of forensic science to task for “serious problems” stemming from “an absence of adequate training and continuing education, rigorous mandatory certification and accreditation programs, adherence to robust performance standards, and effective oversight.” Indeed, the report noted, DNA evidence had repeatedly exonerated people who were wrongly convicted by “faulty forensic science.”

How could this gold standard of forensic evidence become so tarnished? Basically, our ability to detect DNA from a crime scene has outstripped our ability to make sense of it. When DNA forensic science began in the 1980s, the tests didn’t work well unless investigators were able to gather a lot of DNA from one person, and so they were rarely used in court. Since then, Grgicak says, the tests have become more than 100 times more sensitive, prompting investigators to swab more of the crime scene for genetic material—well beyond the bloody knife, to things like skin cells left on a computer keyboard or a doorknob.

[Crime scene illustration annotations: low copy DNA, weak genetic signal; DNA from at least three people; can we isolate the suspect’s DNA?; how do we make sense of all this?; perfect DNA match to suspect unlikely; will this be enough DNA to identify a suspect?; is it possible to get a clean DNA signal?; signal could be from someone else; testable DNA tricky to get from hairs on floor; how many people have been through here?]

“We have very sensitive techniques that give us these more complicated mixtures,” explains Robin Cotton, BU associate professor and director of biomedical forensic sciences. “We need to be able to analyze this evidence. Otherwise, you just throw your hands up in the air and give up, which doesn’t do anybody any good.”

During an interview in her office, Grgicak prints out two graphs showing analyzed DNA evidence from two mock crime scenes (i.e., the DNA is from real blood, but the blood is not from a crime). On the graphs, the variations at each genetic location (our hypothetical Scrabble tiles) show up as little spikes. In the evidence from a single DNA source, two spikes of nearly equal height poke up at distinct points for each of the sixteen locations.

Two random strangers could easily share one or two of these spikes, but the probability of more than one person’s DNA matching every spike is vanishingly small. However, the chance of a false identification grows substantially when the genetic evidence is from multiple people, as it is in the second piece of mock evidence. This graph shows a DNA mixture from five people, and each of the sixteen locations has from four to seven spikes of varying heights. Because people often share a few genetic variations, it’s possible that some of the spikes represent DNA from more than one person. Plus, several low spikes suggest that at least one person contributed only a trace of genetic material to this evidence—possibly so little that his genetic markers at other locations weren’t even detected by the test.

Grgicak points at one location with seven spikes. Maybe the first two spikes are from the same person, or maybe it’s the first and the third, or the second and the fourth.

“It becomes a game of combinations,” she says, which multiply quickly, especially when looking for a few shared genetic markers. Pretty soon, lots of innocent people could appear to be linked to the crime scene.
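The explosion Grgicak describes is easy to see with a few lines of code. A toy count, assuming seven distinct spikes at one location and four contributors (the peak labels are hypothetical, and a real analysis also weighs peak heights and allele frequencies):

```python
from itertools import combinations_with_replacement

# Seven spikes observed at a single genetic location; labels are hypothetical.
peaks = list("ABCDEFG")

# Each contributor carries two alleles (possibly the same one twice), so a
# contributor's genotype is an unordered pair drawn from the observed peaks.
genotypes = list(combinations_with_replacement(peaks, 2))
print(len(genotypes))  # 28 possible genotypes per contributor

# With four contributors, candidate assignments multiply quickly, even
# before ruling out combinations that fail to explain every spike.
print(len(genotypes) ** 4)  # 614656
```

More than 600,000 candidate assignments at a single location, before any constraints are applied, gives a sense of why shared markers with innocent people become so easy to find.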

The first step to making sense of a DNA mixture, Grgicak explains, is to figure out how many people contributed to it. That number is the basis for nearly every other conclusion about the evidence. The old way to estimate it is to count the maximum number of spikes at any genetic location, divide by two, and round up. There were up to seven spikes in Grgicak’s mock evidence DNA mixture, so a forensic scientist using the old formula would conclude that at least four people contributed to it.

“It’s one thing to report that the minimum number of contributors is four, but it’s another thing to use that number in the calculation of a match statistic,” says Grgicak. Recall that there were actually five contributors.
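The old counting rule is simple enough to state in a few lines. A sketch of that formula, with peak counts like those in the mock evidence as input:

```python
import math

def min_contributors(peak_counts):
    """Old rule of thumb: take the maximum number of spikes at any
    genetic location, divide by two, and round up."""
    return math.ceil(max(peak_counts) / 2)

# Mock evidence from the text: the busiest location showed seven spikes.
print(min_contributors([4, 5, 7, 6, 5]))  # 4
```

The rule yields a floor, not the true count: the mock mixture actually came from five people, which is exactly the gap Grgicak warns about when the minimum is plugged into a match statistic.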

[Crime scene illustration annotations: low copy DNA, weak genetic signal; was this jewelry borrowed?; DNA from at least three people; can we isolate this?; eyeglasses may provide clean DNA signal, check nose piece for skin cells; likely handled by others beyond crime scene; swab necklace for touch DNA; only a partial DNA fingerprint; DNA degraded by heat / light?; who was in the apartment last week?]

So, Grgicak and collaborators at Rutgers University and the Massachusetts Institute of Technology spent years developing NOCIt—computational algorithms that could sort through all the possible combinations of DNA spikes in a piece of evidence, taking into account their prevalence in the general population, to determine the likelihood that the genetic material came from one, two, three, four, or five people.

In testing using mock evidence, NOCIt might conclude that one mixture is 99.9 percent likely to have two contributors, for instance. Or it might estimate a 35 percent likelihood of three contributors and a 65 percent likelihood of four contributors. In these studies, Grgicak’s team designates any probability over one percent as a possible answer to the number of DNA contributors.
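The reporting rule from these studies can be sketched with made-up numbers. Everything in the dictionary below is invented for illustration; NOCIt’s real output depends on peak heights, allele frequencies, and more:

```python
# Hypothetical probabilities over one-to-five contributors for one mixture.
posterior = {1: 0.000, 2: 0.004, 3: 0.350, 4: 0.640, 5: 0.006}

# The team's reporting rule: any count with probability over one percent
# is kept as a possible answer for the number of contributors.
possible = [n for n, p in posterior.items() if p > 0.01]
print(possible)  # [3, 4]
```

Under that rule, this hypothetical mixture would be reported as having either three or four contributors, with four the more likely.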

In September 2014, the Department of Defense awarded Grgicak’s lab a $1.7 million contract to turn their NOCIt prototype into something ready to be adopted by forensic labs nationwide.

The ultimate goal, of course, is to increase the certainty that a suspect’s DNA is or isn’t part of the crime scene evidence. To that end, in January 2015, the Department of Justice awarded Grgicak and her collaborators $800,000 to develop MATCHit. The prototype of MATCHit is a bare-bones computer program that asks for the numbers the algorithm will crunch, including the number of contributors and how common every DNA variation is in the general population, according to a database such as the one compiled by the National Institute of Standards and Technology. In addition to generating a match statistic between the suspect and the crime scene evidence, the program yields a common statistical measure called a “p value” to indicate how likely it is that a random person’s DNA would have a match statistic as strong as (or stronger than) the suspect’s. P values range from zero to one; the closer to zero, the more robust the match statistic.

As with NOCIt, the question with MATCHit is: where does a forensic lab draw the line in interpreting these probabilities? So far, Grgicak’s research shows that the match statistics of non-contributors (i.e., innocent people) never have a p value below .01, no matter how complex the crime scene mixture. They have tested MATCHit using DNA mixtures of one, two, and three people (their goal is five), and so far, it’s performed well.

“We know, at least from our own early tests of MATCHit, that we have not falsely included individuals using that threshold,” says Grgicak, “and that’s the most important thing.”
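A sketch of how that 0.01 threshold might be applied in practice; the function and its wording are illustrative, not MATCHit’s actual interface:

```python
# The 0.01 cutoff is the figure reported from the team's early tests; the
# function below is a hypothetical illustration, not MATCHit's real API.
P_VALUE_CUTOFF = 0.01

def interpret_match(p_value):
    """Return a hedged conclusion from the p value attached to a match statistic."""
    if p_value < P_VALUE_CUTOFF:
        return "suspect cannot be excluded as a contributor"
    return "inconclusive: a random person could match this strongly"

print(interpret_match(0.0004))
print(interpret_match(0.2))
```

Because non-contributors in the team’s tests never produced p values below 0.01, a result under the cutoff is strong evidence of a real contributor, while anything above it stays inconclusive rather than incriminating.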

Mistakes in criminal forensics can have grave consequences. Innocent people might get sent to prison. The guilty might escape justice. And once those mistakes lead to judgment in a court of law, they’re hard to rectify.

In addition to avoiding tragic mistakes, Cotton says that tools like NOCIt and MATCHit will help forensic scientists with their ultimate goal—providing information to help the criminal justice system find the truth. “It’s not just helping the prosecution or the defense. If you can find an answer, that’s helpful.”
