Papers

SoK: Cryptographically Protected Database Search

Excited to announce that our paper on protected database search will appear at the 2017 IEEE Symposium on Security and Privacy.

Joint work with Mayank Varia, Arkady Yerukhimovich, Emily Shen, Ariel Hamlin, Vijay Gadepally, Richard Shay, John Darby Mitchell, and Robert K. Cunningham.

Abstract:

Protected database search systems cryptographically isolate the roles of reading from, writing to, and administering the database. This separation limits unnecessary administrator access and protects data in the case of system breaches. Since protected search was introduced in 2000, the area has grown rapidly; systems are offered by academia, start-ups, and established companies.

However, there is no best protected search system or set of techniques. Design of such systems is a balancing act between security, functionality, performance, and usability. This challenge is made more difficult by ongoing database specialization, as some users will want the functionality of SQL, NoSQL, or NewSQL databases. This database evolution will continue, and the protected search community should be able to quickly provide functionality consistent with newly invented databases.

At the same time, the community must accurately and clearly characterize the tradeoffs between different approaches. To address these challenges, we provide the following contributions:

1) An identification of the important primitive operations across database paradigms. We find there are a small number of base operations that can be used and combined to support a large number of database paradigms (a toy illustration of one base operation follows this list).

2) An evaluation of the current state of protected search systems in implementing these base operations. This evaluation describes the main approaches and tradeoffs for each base operation. Furthermore, it puts protected search in the context of unprotected search, identifying key gaps in functionality.

3) An analysis of attacks against protected search for different base queries.

4) A roadmap and tools for transforming a protected search system into a protected database, including an open-source performance evaluation platform and initial user opinions of protected search.
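
To make the notion of a base operation concrete, here is a toy Python sketch of one common approach to the equality base query: an index of tags derived with a keyed pseudorandom function (HMAC-SHA256 here). The sketch and its names are illustrative assumptions of mine, not a system from the paper, and its deterministic tags exhibit exactly the kind of leakage (repetition frequencies) that the attack analysis above concerns.

import hmac, hashlib

def tag(key: bytes, value: bytes) -> bytes:
    # Deterministic tag via a keyed PRF; equal values yield equal tags.
    return hmac.new(key, value, hashlib.sha256).digest()

class EqualityIndex:
    """Toy protected index supporting only the equality base query."""
    def __init__(self, key: bytes, records):
        self.index = {}
        for record_id, value in records:
            # The server stores tags and record ids, never plaintext values.
            self.index.setdefault(tag(key, value), []).append(record_id)

    def search(self, query_tag: bytes):
        # The client computes tag(key, value) and sends only the tag;
        # the server matches it without learning the queried value.
        return self.index.get(query_tag, [])

key = b"\x00" * 32  # toy key; a real client keeps this secret
idx = EqualityIndex(key, [(1, b"alice"), (2, b"bob"), (3, b"alice")])
print(idx.search(tag(key, b"alice")))  # [1, 3]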

Public Key Cryptography with Noisy Private Keys

New Paper: Public Key Cryptography with Noisy Private Keys

Abstract: Passwords bootstrap symmetric and asymmetric cryptography, tying keys to an individual user. Biometrics are intended to strengthen this tie. Unfortunately, biometrics exhibit noise between repeated readings. Fuzzy extractors (Dodis et al., Eurocrypt 2004) derive stable symmetric keys from noisy sources.

We ask if it is also possible for noisy sources to directly replace private keys in asymmetric cryptosystems. We propose a new primitive called public-key cryptosystems with noisy keys. Such a cryptosystem functions when the private key varies according to some metric. An intuitive solution is to combine a fuzzy extractor with a public key cryptosystem. Unfortunately, fuzzy extractors need static helper information to account for noise. This helper information creates fundamental limitations on the resulting cryptosystems.
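
To see where the static helper information enters, here is a minimal Python sketch of the intuitive fuzzy-extractor-plus-public-key combination. The fuzzy extractor is stubbed (no real error correction), the helper derivation is a placeholder, and using the cryptography package's Ed25519 keys is my own choice for illustration.

import os, hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519

def fe_gen(reading: bytes):
    # Stub for fuzzy extractor Gen: a real construction embeds
    # error-correction information in the helper. Placeholder only.
    helper = os.urandom(32)
    key = hashlib.sha256(helper + reading).digest()
    return key, helper

def fe_rep(noisy_reading: bytes, helper: bytes):
    # Stub for Rep: a real Rep corrects bounded noise using the helper;
    # this stub succeeds only on an identical reading.
    return hashlib.sha256(helper + noisy_reading).digest()

# Enrollment: extract a key and use it as an Ed25519 signing seed.
key, helper = fe_gen(b"enrollment-reading")
sk = ed25519.Ed25519PrivateKey.from_private_bytes(key)
public_key = sk.public_key()  # published together with the helper

# Later: reproduce the key from a reading plus the SAME helper.
sk2 = ed25519.Ed25519PrivateKey.from_private_bytes(
    fe_rep(b"enrollment-reading", helper))
sk2.sign(b"message")
# The helper is fixed at enrollment and must accompany every later use,
# so the "private key" is really (noisy reading + fixed public helper):
# this static string is the source of the limitations the abstract describes.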

To overcome these limitations, we directly construct public-key encryption and digital signature algorithms with noisy keys. The core of our constructions is a computational version of the fuzzy vault (Juels and Sudan, Designs, Codes, and Cryptography 2006). Security of our schemes is based on graded encoding schemes (Garg et al., Eurocrypt 2013, Garg et al., TCC 2016). Importantly, our public-key encryption algorithm is based on a weaker model of graded encoding. If functional encryption or indistinguishability obfuscation exist in this weaker model, they also exist in the standard model.

In addition, we use the computational fuzzy vault to construct the first reusable fuzzy extractor (Boyen, CCS 2004) supporting a linear fraction of errors.

Joint work with Charles Herder, Marten van Dijk, and Srinivas Devadas.

Pseudoentropic Isometries

I was excited to join the paper Pseudoentropic Isometries: A New Framework for Fuzzy Extractor Reusability by Quentin Alamélou, Paul-Edmond Berthier, Chloe Cachet, Stéphane Cauchie, Benjamin Fuller, Philippe Gaborit, and Sailesh Simhadri. This paper describes how to use a random oracle to build a reusable fuzzy extractor that corrects a linear fraction of errors. Presented at AsiaCCS 2018. The abstract is below.

Abstract:

Fuzzy extractors (Dodis et al., Eurocrypt 2004) turn a noisy secret into a stable, uniformly distributed key. Reusable fuzzy extractors remain secure when multiple keys are produced from a single noisy secret (Boyen, CCS 2004). Boyen proved that any information-theoretically secure reusable fuzzy extractor is subject to strong limitations. Simoens et al. (IEEE S&P, 2009) then showed that deployed constructions suffer severe security breaks when reused. Canetti et al. (Eurocrypt 2016) proposed using computational security to sidestep this problem. They constructed a computationally secure reusable fuzzy extractor for the Hamming metric that corrects a sublinear fraction of errors.

We introduce a generic approach to constructing reusable fuzzy extractors. We define a new primitive called a reusable pseudoentropic isometry that projects an input metric space to an output metric space. This projection preserves distance and entropy even if the same input is mapped to multiple output metric spaces. A reusable pseudoentropic isometry yields a reusable fuzzy extractor by 1) randomizing the noisy secret using the isometry and 2) applying a traditional fuzzy extractor to derive a secret key.
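
A toy Python sketch of this two-step recipe for the Hamming metric appears below. The isometry here permutes coordinates and XORs a seed-derived pad, both of which preserve Hamming distance; treating its output as pseudoentropic, and letting a plain hash stand in for a traditional fuzzy extractor, are simplifying assumptions of mine rather than the paper's construction.

import hashlib, random

def isometry_map(seed: bytes, bits: list) -> list:
    # Permute positions and XOR a pad, both derived from the seed.
    # Each operation preserves Hamming distance between inputs.
    rng = random.Random(seed)
    perm = list(range(len(bits)))
    rng.shuffle(perm)
    pad = [rng.randrange(2) for _ in bits]
    return [bits[perm[i]] ^ pad[i] for i in range(len(bits))]

def fe_gen(randomized: list):
    # Stand-in for a traditional fuzzy extractor (no error tolerance here).
    key = hashlib.sha256(bytes(randomized)).digest()
    return key, b""  # (key, helper)

def reusable_gen(noisy_secret: list, seed: bytes):
    # Step 1: randomize the secret with a fresh isometry per enrollment.
    randomized = isometry_map(seed, noisy_secret)
    # Step 2: apply an ordinary fuzzy extractor to the randomized value.
    key, helper = fe_gen(randomized)
    # Publishing (seed, helper) and drawing a fresh seed per enrollment
    # is what makes the composition reusable.
    return key, (seed, helper)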

We propose reusable pseudoentropic isometries for the set difference and Hamming metrics. The set difference construction is built from composable digital lockers (Canetti and Dakdouk, Eurocrypt 2008), yielding the first reusable fuzzy extractor that corrects a linear fraction of errors. For the Hamming metric, we show that the second construction of Canetti et al. (Eurocrypt 2016) can be seen as an instantiation of our framework. In both cases, the pseudoentropic isometry’s reusability requires the distribution of noisy secrets to have entropy in each symbol of the alphabet.

Lastly, we implement our set difference solution and describe two use cases.

When are Fuzzy Extractors Possible?

Benjamin Fuller, Leonid Reyzin, and Adam Smith. When are Fuzzy Extractors Possible? Asiacrypt 2016.

Abstract

Fuzzy extractors (Dodis et al., Eurocrypt 2004) convert repeated noisy readings of a high-entropy secret into the same uniformly distributed key. A minimum condition for the security of the key is the hardness of guessing a value that is similar to the secret, because the fuzzy extractor converts such a guess to the key.

We define fuzzy min-entropy to quantify this property of a noisy source of secrets. Fuzzy min-entropy measures the success of the adversary when provided with only the functionality of the fuzzy extractor, that is, the ideal security possible from a noisy distribution. High fuzzy min-entropy is necessary for the existence of a fuzzy extractor.
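
Written out, for a source W over a metric space with distance dis and error tolerance t, the definition is

H^{\mathrm{fuzz}}_{t,\infty}(W) = -\log\Bigl(\max_{w^*} \Pr\bigl[\mathrm{dis}(W, w^*) \le t\bigr]\Bigr)

That is, the adversary's best strategy against the ideal functionality is to name the single point w^* whose radius-t ball captures the most probability mass of W.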

We ask: is high fuzzy min-entropy a sufficient condition for key extraction from noisy sources? If only computational security is required, recent progress on program obfuscation gives evidence that fuzzy min-entropy is indeed sufficient. In contrast, information-theoretic fuzzy extractors are not known for many practically relevant sources of high fuzzy min-entropy.

In this paper, we show that fuzzy min-entropy is also sufficient for information-theoretically secure fuzzy extraction. For every source distribution W for which security is possible, we give a secure fuzzy extractor.

Our construction relies on the fuzzy extractor knowing the precise distribution of the source W. A more ambitious goal is to design a single extractor that works for all possible sources. We show that this more ambitious goal is impossible: we give a family of sources with high fuzzy min-entropy for which no single fuzzy extractor is secure. This result emphasizes the importance of accurate models of high entropy sources.

Catching MPC Cheaters: Identification and Openability

Robert Cunningham, Benjamin Fuller, and Sophia Yakoubov. Catching MPC Cheaters: Identification and Openability. 2016.

Abstract

Secure multi-party computation (MPC) protocols do not completely prevent malicious parties from cheating and disrupting the computation. A coalition of malicious parties can repeatedly cause the computation to abort or provide an input that does not correspond to reality. In this work, we augment MPC with two new properties to discourage cheating. The first of these is a strengthening of identifiable abort, where all parties who do not follow the protocol will be identified as cheaters by each honest party. The second is openability, which means that if a computation output is discovered to be untrue (e.g., by a real-world event contradicting it), a distinguished coalition of parties can recover the MPC inputs.

We provide the first efficient MPC protocol achieving both of these properties. Our scheme extends the SPDZ protocol (Damgård et al., Crypto 2012). SPDZ leverages an offline (computation-independent) pre-processing phase to speed up the online computation. Our protocol is optimistic: it has the same communication and computation complexity in the online phase as SPDZ when no parties cheat. If cheating does occur, each honest party can additionally perform a local computation to identify all cheaters.

We achieve identifiable abort by using a new locally identifiable secret sharing scheme (as defined by Ishai, Ostrovsky, and Zikas (TCC 2012)) which we call commitment enhanced secret sharing, or CESS. In CESS, each SPDZ input share is augmented with a linearly homomorphic commitment. When cheating occurs, each party can use the linear homomorphism to compute a commitment to the corresponding share of the output value. Parties whose claimed output share does not match their output share commitments are identified as cheaters.

We achieve openability through the use of verifiable encryption and specialized zero-knowledge proofs. Openability relies on the availability of an auditable public transcript of the MPC execution, as introduced by Baum, Damgård, and Orlandi (SCN 2014).
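
To illustrate the linear homomorphism that CESS exploits, here is a toy Pedersen-style commitment in Python. Pedersen commitments are a standard linearly homomorphic scheme; the abstract does not name this instantiation, and the small parameters below are purely for demonstration.

# Com(x, r) = g^x * h^r mod p; hiding comes from r, binding from
# the hardness of computing log_g(h). Toy parameters only.
p = 2**127 - 1   # a Mersenne prime used as a toy modulus
g, h = 3, 7      # in practice h must be chosen so log_g(h) is unknown

def commit(x: int, r: int) -> int:
    return pow(g, x, p) * pow(h, r, p) % p

# Linear homomorphism: multiplying commitments adds the committed values.
c1, c2 = commit(5, 11), commit(9, 4)
assert c1 * c2 % p == commit(5 + 9, 11 + 4)

# In the protocol, commitments to input shares can therefore be combined
# into a commitment to each party's output share; a party whose claimed
# output share fails this check is identified as a cheater.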

Reusable Fuzzy Extractors for Low-Entropy Distributions

Ran Canetti, Benjamin Fuller, Omer Paneth, Leonid Reyzin, and Adam Smith.  Reusable Fuzzy Extractors for Low-Entropy Distributions. Eurocrypt 2016.

Previous titles were “Reusable Fuzzy Extractors via Digital Lockers” and “Key Derivation From Noisy Sources With More Errors Than Entropy.”

Abstract

Fuzzy extractors (Dodis et al., Eurocrypt 2004) convert repeated noisy readings of a secret into the same uniformly distributed key. To eliminate noise, they require an initial enrollment phase that takes the first noisy reading of the secret and produces a nonsecret helper string to be used in subsequent readings. Reusable fuzzy extractors (Boyen, CCS 2004) remain secure even when this initial enrollment phase is repeated multiple times with noisy versions of the same secret, producing multiple helper strings (for example, when a single person’s biometric is enrolled with multiple unrelated organizations).

We construct the first reusable fuzzy extractor that makes no assumptions about how multiple readings of the source are correlated (the only prior construction assumed a very specific, unrealistic class of correlations). The extractor works for binary strings with Hamming noise; it achieves computational security under assumptions on the security of hash functions or in the random oracle model. It is simple and efficient and tolerates near-linear error rates.

Our reusable extractor is secure for source distributions of linear min-entropy rate. The construction is also secure for sources with much lower entropy rates (lower than those supported by prior, nonreusable constructions), assuming that the distribution has some additional structure: namely, that random subsequences of the source have sufficient min-entropy. We show that such structural assumptions are necessary to support low entropy rates.
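
The subsequence structure is used directly in the construction: the key is locked under many randomly sampled subsequences, and reproduction succeeds if any sampled subsequence avoids all the errors. A minimal random-oracle-style Python sketch, with my own parameter choices and a plain hash standing in for a composable digital locker:

import hashlib, os, secrets

def H(nonce: bytes, subseq: bytes) -> bytes:
    # Hash modeled as a random oracle; stands in for a digital locker.
    return hashlib.shake_256(nonce + subseq).digest(32)

def gen(w: str, lockers: int = 200, k: int = 15):
    key = os.urandom(16)
    helper = []
    for _ in range(lockers):
        positions = [secrets.randbelow(len(w)) for _ in range(k)]
        subseq = "".join(w[p] for p in positions).encode()
        nonce = os.urandom(16)
        pad = H(nonce, subseq)
        # Lock key || 0^16; the zero block lets Rep detect a correct opening.
        ct = bytes(a ^ b for a, b in zip(key + b"\x00" * 16, pad))
        helper.append((positions, nonce, ct))
    return key, helper

def rep(w2: str, helper):
    for positions, nonce, ct in helper:
        subseq = "".join(w2[p] for p in positions).encode()
        pt = bytes(a ^ b for a, b in zip(ct, H(nonce, subseq)))
        if pt[16:] == b"\x00" * 16:  # this subsequence hit no errors
            return pt[:16]
    return None  # every sampled subsequence overlapped an error

With error rate r and subsequence length k, each locker opens with probability roughly (1 - r)^k, so the number of lockers trades off against the tolerated error rate.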

We then explore further how different structural properties of a noisy source can be used to construct fuzzy extractors when the error rates are high, providing a computationally secure and an information-theoretically secure construction for large-alphabet sources.

Iris Biometric Security Challenges and Possible Solutions

Gene Itkis, Venkat Chandar, Benjamin Fuller, Joseph Campbell, Robert Cunningham. Iris Biometric Security Challenges and Possible Solutions: For your eyes only? Using the iris as a key. IEEE Signal Processing Magazine, 2015.

Abstract

Biometrics were originally developed for identification, such as for criminal investigations. More recently, biometrics have also been utilized for authentication. Most biometric authentication systems today match a user's biometric reading against a stored reference template generated during enrollment. If the reading and the template are sufficiently close, the authentication is considered successful and the user is authorized to access protected resources. This binary matching approach has major inherent vulnerabilities.
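
In the common iris-code setting, this binary matching reduces to a Hamming-distance threshold test, sketched below in Python (the encoding and threshold are illustrative assumptions, not values from the article):

def authenticate(reading: bytes, template: bytes, threshold: int = 50) -> bool:
    # Count the bits on which the fresh reading and stored template differ.
    distance = sum(bin(a ^ b).count("1") for a, b in zip(reading, template))
    # Binary accept/reject; note the inherent weakness: the stored
    # template itself suffices to pass this test if it is ever stolen.
    return distance <= threshold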

Unifying Leakage Classes

Benjamin Fuller and Ariel Hamlin. Unifying Leakage Classes: Simulatable Leakage and Pseudoentropy. ICITS 2015.

Abstract

Leakage-resilient cryptography builds systems that withstand partial adversary knowledge of secret state. Ideally, leakage-resilient systems withstand current and future attacks, restoring confidence in the security of implemented cryptographic systems. Understanding the relations between classes of leakage functions is an important step toward this goal.

In this work, we consider the memory leakage model, where the leakage class contains functions over the system’s entire secret state. Standard classes include functions with bounded output length, functions that retain (pseudo)entropy in the secret, and functions that leave the secret computationally unpredictable.

Standaert, Pereira, and Yu (Crypto, 2013) introduced a new class of leakage functions they call simulatable leakage. A leakage function is simulatable if a simulator can produce indistinguishable leakage without access to the true secret state. We extend their notion to general applications and consider two versions. For weak simulatability, the simulated leakage must be indistinguishable from the true leakage in the presence of public information. For strong simulatability, this requirement must also hold when the distinguisher has access to the true secret state. We show the following:

* Weakly simulatable functions retain computational unpredictability.
* Strongly simulatable functions retain pseudoentropy.
* There are bounded-length functions that are not weakly simulatable.
* There are weakly simulatable functions that remove pseudoentropy.
* There are leakage functions that retain computational unpredictability but are not weakly simulatable.
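
As a sketch of the experiment behind weak simulatability, the following Python harness estimates a distinguisher's advantage; the function names (leak, sim, distinguish, sample_state) are hypothetical stand-ins for the objects in the definition, not code from the paper.

import random

def weak_sim_advantage(leak, sim, distinguish, sample_state, trials=10000):
    # Weak simulatability: a distinguisher seeing only public information
    # cannot tell real leakage leak(secret) from simulated leakage sim(public).
    correct = 0
    for _ in range(trials):
        secret, public = sample_state()
        b = random.randrange(2)
        leakage = leak(secret) if b == 0 else sim(public)
        correct += distinguish(public, leakage) == b  # no access to secret
    return abs(correct / trials - 0.5)

# Strong simulatability would instead give the distinguisher the secret,
# i.e., call distinguish(public, secret, leakage).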

Strong Key Derivation from Noisy Sources

I have completed and defended my Ph.D., titled Strong Key Derivation from Noisy Sources. It is largely drawn from three papers: Computational Fuzzy Extractors, Reusable Fuzzy Extractors for Low-Entropy Distributions, and When are Fuzzy Extractors Possible? Those papers are more up to date than the thesis, but the thesis contains a cohesive summary of the results that can be useful as an introduction.