Public Key Cryptography with Noisy Private Keys

Abstract: Passwords bootstrap symmetric and asymmetric cryptography, tying keys to an individual user. Biometrics are intended to strengthen this tie. Unfortunately, biometrics exhibit noise between repeated readings. Fuzzy extractors (Dodis et al., Eurocrypt 2004) derive stable symmetric keys from noisy sources.
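To make the interface concrete, here is a toy code-offset fuzzy extractor for Hamming noise. It is a minimal sketch only, not a secure construction: a 5x repetition code stands in for a real error-correcting code, SHA-256 stands in for a strong extractor, and all names are illustrative.

```python
# Toy code-offset fuzzy extractor for Hamming noise (sketch only, not
# secure). Assumes w is a list of bits whose length is a multiple of REP.
import hashlib
import secrets

REP = 5  # repetition factor of the toy error-correcting code

def _encode(bits):
    # repetition-code encoder: repeat each bit REP times
    return [b for b in bits for _ in range(REP)]

def _decode(bits):
    # majority-vote decoder: corrects up to 2 flips per REP-bit block
    return [int(sum(bits[i:i + REP]) > REP // 2)
            for i in range(0, len(bits), REP)]

def fe_gen(w):
    # enrollment: first noisy reading -> (key, public helper string)
    msg = [secrets.randbelow(2) for _ in range(len(w) // REP)]
    helper = [a ^ b for a, b in zip(w, _encode(msg))]  # the "code offset"
    key = hashlib.sha256(bytes(msg)).digest()
    return key, helper

def fe_rep(w_prime, helper):
    # reproduction: a nearby reading plus the helper -> the same key
    shifted = [a ^ b for a, b in zip(w_prime, helper)]  # codeword + noise
    return hashlib.sha256(bytes(_decode(shifted))).digest()
```

If a later reading differs from enrollment in at most two positions per five-bit block, fe_rep reproduces the identical key.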

We ask whether noisy sources can also directly replace private keys in asymmetric cryptosystems. We propose a new primitive called public-key cryptosystems with noisy keys. Such a cryptosystem functions when the private key varies according to some metric. An intuitive solution is to combine a fuzzy extractor with a public-key cryptosystem. Unfortunately, fuzzy extractors need static helper information to account for noise, and this helper information creates fundamental limitations on the resulting cryptosystems.
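To illustrate the intuitive composition and where the static helper string enters, here is a sketch that seeds a deterministic Ed25519 keypair with the toy extractor's 32-byte output, using the third-party cryptography package; enroll and sign_with_reading are hypothetical names, not the paper's scheme.

```python
# Hypothetical fuzzy-extractor + signature composition. The static
# helper string must be stored and presented at every later use, which
# is the limitation the paper's direct constructions avoid.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def enroll(w):
    key, helper = fe_gen(w)  # 32-byte key from the sketch above
    sk = Ed25519PrivateKey.from_private_bytes(key)
    return sk.public_key(), helper  # both are published

def sign_with_reading(w_prime, helper, message):
    key = fe_rep(w_prime, helper)   # re-derive the same 32 bytes
    sk = Ed25519PrivateKey.from_private_bytes(key)
    return sk.sign(message)
```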

To overcome these limitations, we directly construct public-key encryption and digital signature algorithms with noisy keys. The core of our constructions is a computational version of the fuzzy vault (Juels and Sudan, Designs, Codes, and Cryptography 2006). Security of our schemes is based on graded encoding schemes (Garg et al., Eurocrypt 2013; Garg et al., TCC 2016). Importantly, our public-key encryption algorithm is based on a weaker model of graded encoding: if functional encryption or indistinguishability obfuscation exist in this weaker model, they also exist in the standard model.
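For intuition about the underlying primitive, here is a toy version of the classic information-theoretic fuzzy vault, not the paper's computational variant. Genuine set elements are evaluation points of a random polynomial whose constant term is the secret, hidden among random chaff points; unlocking here brute-forces degree+1 subsets against a hash check where a real instantiation would use Reed-Solomon decoding, and every parameter below is illustrative.

```python
# Toy fuzzy vault over GF(P) (sketch of the classic scheme).
import hashlib
import itertools
import secrets

P = 2**31 - 1  # a Mersenne prime; the toy field is GF(P)

def _poly_eval(coeffs, x):
    y = 0
    for c in reversed(coeffs):  # coeffs[0] is the constant term
        y = (y * x + c) % P
    return y

def _interpolate_at_zero(points):
    # Lagrange interpolation of the constant term p(0)
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def vault_lock(secret, genuine_set, num_chaff=50, degree=2):
    # genuine points lie on a random polynomial with p(0) = secret
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(degree)]
    points = {x: _poly_eval(coeffs, x) for x in genuine_set}
    while len(points) < len(genuine_set) + num_chaff:
        x = secrets.randbelow(P)  # chaff points hide the genuine ones
        if x not in points:
            points[x] = secrets.randbelow(P)
    check = hashlib.sha256(str(secret).encode()).digest()
    return points, check, degree

def vault_unlock(points, check, degree, candidate_set):
    hits = [(x, points[x]) for x in candidate_set if x in points]
    for subset in itertools.combinations(hits, degree + 1):
        guess = _interpolate_at_zero(list(subset))
        if hashlib.sha256(str(guess).encode()).digest() == check:
            return guess  # enough genuine overlap found
    return None
```

Unlocking succeeds whenever the candidate set shares at least degree + 1 elements with the genuine set.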

In addition, we use the computational fuzzy vault to construct the first reusable fuzzy extractor (a notion introduced by Boyen, CCS 2004) supporting a linear fraction of errors.

Joint work with Charles Herder, Marten van Dijk, and Srinivas Devadas.

Pseudoentropic Isometries

I was excited to join the paper Pseudoentropic Isometries: A New Framework for Fuzzy Extractor Reusability by Quentin Alamélou, Paul-Edmond Berthier, Chloe Cachet, Stéphane Cauchie, Benjamin Fuller, Philippe Gaborit, and Sailesh Simhadri. This paper shows how to use a random oracle to build a reusable fuzzy extractor that corrects a linear fraction of errors. It was presented at AsiaCCS 2018. The abstract is below.

Abstract:

Fuzzy extractors (Dodis et al., Eurocrypt 2004) turn a noisy secret into a stable, uniformly distributed key. Reusable fuzzy extractors remain secure when multiple keys are produced from a single noisy secret (Boyen, CCS 2004). Boyen proved that any information-theoretically secure reusable fuzzy extractor is subject to strong limitations. Simoens et al. (IEEE S&P, 2009) then showed that deployed constructions suffer severe security breaks when reused. Canetti et al. (Eurocrypt 2016) proposed using computational security to sidestep this problem. They constructed a computationally secure reusable fuzzy extractor for the Hamming metric that corrects a sublinear fraction of errors.

We introduce a generic approach to constructing reusable fuzzy extractors. We define a new primitive called a reusable pseudoentropic isometry that projects an input metric space to an output metric space. This projection preserves distance and entropy even if the same input is mapped to multiple output metric spaces. A reusable pseudoentropic isometry yields a reusable fuzzy extractor by 1) randomizing the noisy secret using the isometry and 2) applying a traditional fuzzy extractor to derive a secret key.
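In code, the composition looks roughly like the sketch below, where iso.map is a hypothetical interface for the reusable pseudoentropic isometry and fe_gen/fe_rep is any traditional fuzzy extractor (such as the code-offset toy sketched earlier on this page).

```python
# Sketch of the generic composition; names are illustrative.
import secrets

def reusable_gen(w, iso):
    seed = secrets.token_bytes(16)     # fresh randomness per enrollment
    y = iso.map(w, seed)               # 1) randomize the noisy secret
    key, helper = fe_gen(y)            # 2) run a traditional fuzzy extractor
    return key, (seed, helper)         # the seed joins the public helper data

def reusable_rep(w_prime, public, iso):
    seed, helper = public
    y_prime = iso.map(w_prime, seed)   # the isometry preserves distance to y
    return fe_rep(y_prime, helper)
```

Because each enrollment uses a fresh seed, the outputs of different enrollments look unrelated even though they come from the same noisy secret.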

We propose reusable pseudoentropic isometries for the set difference and Hamming metrics. The set difference construction is built from composable digital lockers (Canetti and Dakdouk, Eurocrypt 2008), yielding the first reusable fuzzy extractor that corrects a linear fraction of errors. For the Hamming metric, we show that the second construction of Canetti et al. (Eurocrypt 2016) can be seen as an instantiation of our framework. In both cases, the pseudoentropic isometry's reusability requires the noisy secret's distribution to have entropy in each symbol of the alphabet.
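For intuition, here is the standard hash-based digital locker in the random oracle model, a minimal sketch rather than the paper's composable formalization; parameters are illustrative.

```python
# Minimal hash-based digital locker (random oracle model sketch).
# lock hides `val` under `key`; unlock detects a wrong key via padding.
import hashlib
import secrets

PAD = 16  # zero bytes appended so a wrong key is detected

def _expand(nonce, key, n):
    # stretch SHA-256 into n pseudorandom bytes (stand-in for the oracle)
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(nonce + key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def lock(key, val):
    nonce = secrets.token_bytes(16)
    pad = _expand(nonce, key, len(val) + PAD)
    ct = bytes(a ^ b for a, b in zip(val + b"\x00" * PAD, pad))
    return nonce, ct

def unlock(key, locker):
    nonce, ct = locker
    pt = bytes(a ^ b for a, b in zip(ct, _expand(nonce, key, len(ct))))
    return pt[:-PAD] if pt[-PAD:] == b"\x00" * PAD else None
```

A wrong key yields random-looking padding, so unlock returns None except with negligible probability.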

Lastly, we implement our set difference solution and describe two use cases.

Reusable Fuzzy Extractors for Low-Entropy Distributions

Ran Canetti, Benjamin Fuller, Omer Paneth, Leonid Reyzin, and Adam Smith.  Reusable Fuzzy Extractors for Low-Entropy Distributions. Eurocrypt 2016.

Previous titles were “Reusable Fuzzy Extractors via Digital Lockers” and “Key Derivation From Noisy Sources With More Errors Than Entropy.”

Abstract

Fuzzy extractors (Dodis et al., Eurocrypt 2004) convert repeated noisy readings of a secret into the same uniformly distributed key. To eliminate noise, they require an initial enrollment phase that takes the first noisy reading of the secret and produces a nonsecret helper string to be used in subsequent readings. Reusable fuzzy extractors (Boyen, CCS 2004) remain secure even when this initial enrollment phase is repeated multiple times with noisy versions of the same secret, producing multiple helper strings (for example, when a single person’s biometric is enrolled with multiple unrelated organizations).

We construct the first reusable fuzzy extractor that makes no assumptions about how multiple readings of the source are correlated (the only prior construction assumed a very specific, unrealistic class of correlations). The extractor works for binary strings with Hamming noise; it achieves computational security under assumptions on the security of hash functions or in the random oracle model. It is simple and efficient and tolerates near-linear error rates.

Our reusable extractor is secure for source distributions of linear min-entropy rate. The construction is also secure for sources with much lower entropy rates (lower than those supported by prior, nonreusable constructions), assuming that the distribution has some additional structure, namely, that random subsequences of the source have sufficient min-entropy. We show that such structural assumptions are necessary to support low entropy rates.
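The "sample-then-lock" idea behind the construction can be sketched as follows, assuming a digital locker interface lock/unlock such as the hash-based sketch earlier on this page; names and parameters are illustrative, not the paper's.

```python
# Sketch of sample-then-lock. `w` is a bytes-like noisy reading.
import secrets

def stl_gen(w, num_lockers=100, subseq_len=20):
    key = secrets.token_bytes(16)
    helper = []
    for _ in range(num_lockers):
        pos = [secrets.randbelow(len(w)) for _ in range(subseq_len)]
        sub = bytes(w[i] for i in pos)        # a random subsequence of w
        helper.append((pos, lock(sub, key)))  # lock the key under it
    return key, helper

def stl_rep(w_prime, helper):
    for pos, locker in helper:
        sub = bytes(w_prime[i] for i in pos)
        key = unlock(sub, locker)
        if key is not None:
            return key  # this subsequence matched enrollment exactly
    return None
```

A reading with a low enough error rate leaves some sampled subsequence error-free with high probability, so one locker opens; security rests on each subsequence retaining enough min-entropy, which is exactly the structural assumption above.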

We then explore further how different structural properties of a noisy source can be used to construct fuzzy extractors when the error rates are high, providing a computationally secure and an information-theoretically secure construction for large-alphabet sources.