How a cryptographic definition of “knowledge” can help us understand the Fifth Amendment and compelled decryption

By Sarah Scheffler.

 

The best thing about interdisciplinary work is getting to unite concepts that two very different fields have each studied extensively.

As far as I know, law doesn’t have a single formal working definition of “knowledge,” but it does draw heavily from philosophy, which has a whole subfield devoted to working out definitions of knowledge.

On the other hand, cryptography absolutely has consensus on a formal definition of “knowledge.”  That agreement was born of a practical need — designers of cryptographic systems need to know what they are trying to build, and users of those systems need to know what they’re getting.  The concept of “knowledge” in cryptography is that you “know” something if you are capable of outputting it.  You “know” the answer to an exam question if you are capable of consistently outputting the answer.  You “know” your phone number if you can say it.  Clearly this leaves something to be desired philosophically — I might be able to output a drawing, but I wouldn’t say I “know” the drawing — but as a practical matter, the definition captures most useful scenarios.  Among other uses, this notion of “knowledge” lets cryptographers define how much more information someone would “know” about a message after seeing its encryption — hopefully, nothing.
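This “knowledge as ability to output” idea is made formal in proofs of knowledge: a prover “knows” a secret exactly when an extractor could compute that secret from the prover’s accepting answers. As a rough illustration (not from the paper), here is a toy Schnorr-style protocol with tiny, deliberately insecure parameters, where two accepting transcripts let the extractor output the secret:

```python
# Toy Schnorr proof of knowledge over a small prime-order group.
# Illustrates the cryptographic definition: a prover "knows" x if an
# extractor can output x given the prover's accepting responses.
# Parameters are tiny and insecure, chosen purely for illustration.
p, q, g = 23, 11, 2          # g = 2 generates a subgroup of order q = 11 mod 23
x = 7                        # the secret the prover "knows"
h = pow(g, x, p)             # public value h = g^x mod p

def prove(r, c):
    """Prover's response to challenge c, using commitment randomness r."""
    return (r + c * x) % q

def verify(a, c, s):
    """Verifier accepts iff g^s == a * h^c (mod p)."""
    return pow(g, s, p) == (a * pow(h, c, p)) % p

def extract(c1, s1, c2, s2):
    """From two accepting transcripts sharing commitment a, output x."""
    return ((s1 - s2) * pow(c1 - c2, -1, q)) % q   # modular inverse (Python 3.8+)

r = 5
a = pow(g, r, p)             # prover's commitment a = g^r mod p
c1, c2 = 3, 9                # two different verifier challenges
s1, s2 = prove(r, c1), prove(r, c2)
assert verify(a, c1, s1) and verify(a, c2, s2)
print(extract(c1, s1, c2, s2))   # recovers the secret x = 7
```

The point of the sketch is the last line: because an extractor can output x, the prover is said to “know” x — knowledge is defined by output capability, not by any mental state.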

These different definitions can be helpful in ways you would never expect.  In a recent work, we turned our attention to a legal question: Can a court legally order the owner of an encrypted device to decrypt it and provide its contents?  This practice is called “compelled decryption.”  Aside from the obvious encryption connection, you wouldn’t expect cryptographic definitions to be helpful in answering this question, or any other legal question.  But adding cryptographic tools to the legal analysis toolbox turns out to be surprisingly helpful.

The compelled decryption question is rooted in the Fifth Amendment of the U.S. Constitution, which states (in part) that “no person … shall be compelled in any criminal case to be a witness against himself.”  This strong protection is meant to defend people from the “cruel trilemma” of being forced to choose between lying, self-incrimination, or facing contempt of court by staying silent.  However, over the years, courts have read certain limitations into this protection — for example, the Fifth Amendment only applies to testimony the government forces or coerces out of you, not statements you volunteer.

One of these exceptions has to do with the government’s “knowledge” of testimony that comes from an action.  For example, if you’re subpoenaed to bring something to court — let’s say a plane ticket — then the act of bringing that plane ticket to court itself testifies that the ticket exists, that you possess it, and that the ticket you brought is authentic.  The rule since 1976 for this “implicit” testimony has been: The government can compel you to produce the plane ticket if — and only if — it already “knows” all that implicit testimony.  The government doesn’t need to know anything about the contents of the plane ticket itself, only the “meta” information that is communicated by your act of producing the ticket.

This is where a formal definition of knowledge becomes useful.  How can we say the government “knows” the ticket exists?  Here, we find it useful to do the same kind of thought experiment that we might have done in cryptography.  The government knows the ticket exists if it can produce the ticket.  In fact, we could take this further and say the government knows the ticket exists if it is *capable* of producing the ticket — it doesn’t actually have to do so.  If the government knows, say via your friend’s testimony, that your plane ticket is in your desk drawer, then it “knows” you have it because it could go get it from your desk drawer if it so chose.

It turns out this concept actually works pretty well to describe prior cases that involve this kind of “implicit testimony.”  We reviewed every Supreme Court case involving this “implicit testimony” and a couple dozen Circuit Court cases, just to be sure.  All of the court cases that involve non-cryptographic subpoenas like producing paper documents align with this cryptographic “knowledge” approach.

However, when it comes to cases that involve cryptography, like being forced to disclose the contents of an encrypted computer, the courts are much more confused — there are decisions all over the map.  Some say nothing at all can be compelled.  Others say everything — even the password itself — can be compelled directly.  Some say the government only has to show that you know the password, and once they do that, you must provide the decrypted contents (though not the password itself).  Biometrics like fingerprint logins or FaceID complicate the question even further — physical attributes like fingerprints are generally not protected under the Fifth Amendment, but at least one court has argued that use of a fingerprint to log in should be treated differently than other uses of fingerprints.

The problem is genuinely confusing.  When decrypting your device, what is the implicit testimony?  If we’re asking for the decryption of a specific known document, then we can use the same approach as before: the government must know the document’s existence and so on.  If we’re looking for an underlying set of documents that may or may not exist, it seems more complicated.

Our cryptographic “knowledge” method gives us a way out!  The government “knows” the testimony inherent in the action if it could produce the result itself.  The existence of an unencrypted backup of the files is certainly sufficient.  Witness testimony about a particular file on the drive would probably be sufficient to compel production of that file.  But the government can’t compel testimonial information it didn’t know in advance.  It shouldn’t be able to compel production of documents it doesn’t know exist, and it definitely shouldn’t be allowed to compel passwords themselves.

This finding is not absolute — after all, we just used our own cryptographic definition of “knowledge.”  Other methods probably lead to different results.  But it’s nice to find a method that is consistent with prior cases, is grounded in theory, and allows reasoning about cryptography directly — a task that has confounded courts to date.  Part of the excitement of interdisciplinary work is that very occasionally, you find a marriage between concepts that not only works but shines light on genuinely difficult questions.

For more technical details, see our paper at https://eprint.iacr.org/2020/862.

 

Sarah Scheffler is a PhD student in the BUSec group working with Prof. Mayank Varia.  She studies applied cryptography, including zero-knowledge proofs, multi-party computation, secure messaging, private set intersection, and hash combiners.  Her research creates new cryptographic capabilities inspired by the needs of society, law, and policy.  Visit her personal website sarahscheffler.net.
