fuzzy extractors

8 Submissions to publication!!! Code Offset in the Exponent

This paper is finally published. It's my new record for the number of submissions needed. The worst part is that it was Luke's first paper, and the long road to publication unnecessarily stunted his growth.


I think it's very cool, but I say that about everything I end up writing up!

Abstract: Fuzzy extractors transform a noisy source e into a stable key which can be reproduced from a nearby value e'. They are a fundamental tool for key derivation from biometric sources. This work introduces code offset in the exponent and uses this construction to build the first reusable fuzzy extractor that simultaneously supports structured, low entropy distributions with correlated symbols and confidence information. These properties are specifically motivated by the most pertinent applications (key derivation from biometrics and physical unclonable functions), which typically demonstrate low entropy with additional statistical correlations and benefit from extractors that can leverage confidence information for efficiency. Code offset in the exponent is a group encoding of the code offset construction (Juels and Wattenberg, CCS 1999) that stores the value e in a one-time pad which is sampled as a codeword, Ax, of a linear error-correcting code: Ax + e. Rather than encoding Ax + e directly, code offset in the exponent calls for encoding by exponentiation of a generator in a cryptographically strong group. We demonstrate security of the construction in the generic group model, establishing security whenever the inner product between the error distribution and all vectors in the null space of the code is unpredictable. We show this condition includes distributions supported by multiple prior fuzzy extractors. Our analysis also shows a prior construction of pattern matching obfuscation (Bishop et al., Crypto 2018) is secure for more distributions than previously known.
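To make the encoding concrete, here is a toy Python sketch with illustrative parameters (a tiny prime-order subgroup and a 4x2 code of my choosing; a real instantiation would use an elliptic-curve group and a code chosen to support decoding in the exponent). It also checks the security condition from the abstract: any vector in the null space of the code cancels the pad Ax and exposes exactly the inner product with the error vector.

    # Toy sketch of code offset in the exponent; all parameters illustrative.
    p, q, g = 23, 11, 4                      # g generates the order-q subgroup of Z_p*
    A = [(1, 2), (3, 5), (7, 1), (4, 7)]     # 4x2 generator matrix over Z_q
    x = (6, 9)                               # secret seed; Ax is the one-time pad
    e = (2, 0, 1, 10)                        # noisy reading, a vector in Z_q^4

    # Publish g^{(Ax + e)_i} in place of Ax + e itself.
    cipher = [pow(g, (r[0]*x[0] + r[1]*x[1] + ei) % q, p) for r, ei in zip(A, e)]

    # For any v with vA = 0, the pad cancels: prod_i cipher[i]^{v_i} = g^{<v, e>}.
    # Security therefore requires <v, e> to be unpredictable for all such v.
    v = (1, 1, 0, 10)                        # row 4 of A = row 1 + row 2, so vA = 0 mod q
    lhs = 1
    for ci, vi in zip(cipher, v):
        lhs = lhs * pow(ci, vi, p) % p
    assert lhs == pow(g, sum(vi*ei for vi, ei in zip(v, e)) % q, p)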

Possibility of continuous source fuzzy extractors

Lowen and I just received word that our recent work was accepted to ISIT. This paper asks whether you can build a universal fuzzy extractor for all high fuzzy min-entropy distributions. That is, can we have one construction that always just works? Unfortunately, the answer is negative. It is possible to artificially construct families of distributions that are impossible to simultaneously secure. This paper shares a lot of techniques with prior work by Reyzin, Smith, and me. Excited to talk about these techniques more with the information theory community!

Reusable Authentication from the Iris

I’m super excited to put out my first paper written solely with UConn students.  James and Sailesh have put a ton of work into this.  We build a full key derivation system from the human iris by integrating image processing and the crypto described in our previous paper.  I’m particularly excited because I started working on this problem in graduate school and it felt like we’d never get to an actual implementation.

Abstract: Mobile platforms use biometrics for authentication. Unfortunately, biometrics exhibit noise between repeated readings. Due to the noise, biometrics cannot be hashed like passwords and are instead stored in plaintext, so device compromise completely reveals the user's biometric value.

To limit privacy violations, one can use fuzzy extractors to derive a stable cryptographic key from biometrics (Dodis et al., Eurocrypt 2004). Unfortunately, fuzzy extractors have not seen wide deployment due to insufficient security guarantees. Current fuzzy extractors provide no security for real biometric sources and no security if a user enrolls the same biometric with multiple devices or providers.

Previous work claims key derivation systems from the iris but only under weak adversary models. In particular, no known construction securely handles the case of multiple enrollments. Canetti et al. (Eurocrypt 2016) proposed a new fuzzy extractor called sample-then-lock.

We construct biometric key derivation for the iris starting from sample-then-lock. Achieving satisfactory parameters requires modifying and coupling the image processing and the cryptography. Our construction is implemented in Python and is being open-sourced (a sketch of the sample-then-lock core appears after the feature list). Our system has the following novel features:

— 45 bits of security. This bound is pessimistic, assuming the adversary can sample strings distributed according to the iris in constant time. Such an algorithm is not known.

— Secure enrollment with multiple services.

— Natural incorporation of a password, enabling multifactor authentication. The structure of the construction allows the overall security to be the sum of the security of each factor (increasing security to 79 bits).
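For intuition, here is a rough Python sketch of the sample-then-lock core with the password folded in. All names and parameters are hypothetical and much smaller than the real system's, which couples this step with iris image processing.

    # Rough sketch of sample-then-lock (Canetti et al.) with a password factor.
    import hashlib, random, secrets
    rng = random.SystemRandom()

    def enroll(iris_code: str, password: str, lockers: int = 3, subset: int = 8):
        key = secrets.token_bytes(16)
        helpers = []
        for _ in range(lockers):
            pos = sorted(rng.sample(range(len(iris_code)), subset))
            sample = "".join(iris_code[i] for i in pos) + password
            nonce = secrets.token_bytes(16)
            pad = hashlib.sha256(nonce + sample.encode()).digest()
            ct = bytes(a ^ b for a, b in zip(pad, key + b"\x00" * 16))
            helpers.append((pos, nonce, ct))       # public helper data
        return key, helpers

    def reproduce(iris_code: str, password: str, helpers):
        for pos, nonce, ct in helpers:
            sample = "".join(iris_code[i] for i in pos) + password
            pad = hashlib.sha256(nonce + sample.encode()).digest()
            pt = bytes(a ^ b for a, b in zip(pad, ct))
            if pt.endswith(b"\x00" * 16):          # trailing zeros signal success
                return pt[:16]
        return None   # every sampled subset hit a flipped bit

Reproduction succeeds if any locker's sampled positions all agree between the two readings, and an attacker must guess a sampled subset and the password jointly, which is why the security of the two factors adds.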

Environmentally Keyed Malware

Our paper (with Jeremy Blackthorne, Ben Kaiser, and Bulent Yener) on how malware authenticates its environment was published at Latincrypt 2017. Abstract below:

Abstract: Malware needs to execute on a target machine while simultaneously keeping its payload confidential from a malware analyst. Standard encryption can be used to ensure the confidentiality, but it does not address the problem of hiding the key. Any analyst can find the decryption key if it is stored in the malware or derived in plain view.

One approach is to derive the key from a part of the environment which changes when the analyst is present. Such malware derives a key from the environment and encrypts its true functionality under this key.
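As a benign toy illustration (the environment probe and cipher here are stand-ins chosen for brevity, not the paper's):

    # Toy environmental keying: the key is recomputed from an environment
    # reading rather than stored, so it only exists on the intended target.
    import hashlib

    def derive_key(environment_reading: bytes) -> bytes:
        return hashlib.sha256(b"env-auth|" + environment_reading).digest()

    def toggle_payload(data: bytes, environment_reading: bytes) -> bytes:
        key = derive_key(environment_reading)
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))  # toy XOR cipher

    # On the target, the reading matches enrollment and the payload decrypts;
    # in an analyst's environment (different reading), the derived key is wrong.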

In this paper, we present a formal framework for environmental authentication. We formalize the interaction between malware and analyst in three settings: 1) blind, in which the analyst does not have access to the target environment; 2) basic, in which the analyst can load a single analysis toolkit on an infected target; and 3) resettable, in which the analyst can create multiple copies of an infected environment. We show necessary and sufficient conditions for malware security in the blind and basic games and show that even under mild conditions, the analyst can always win in the resettable scenario.

Public Key Cryptography with Noisy Private Keys


Abstract: Passwords bootstrap symmetric and asymmetric cryptography, tying keys to an individual user. Biometrics are intended to strengthen this tie. Unfortunately, biometrics exhibit noise between repeated readings. Fuzzy extractors (Dodis et al., Eurocrypt 2004) derive stable symmetric keys from noisy sources.

We ask if it is also possible for noisy sources to directly replace private keys in asymmetric cryptosystems. We propose a new primitive called public-key cryptosystems with noisy keys. Such a cryptosystem functions when the private key varies according to some metric. An intuitive solution is to combine a fuzzy extractor with a public key cryptosystem. Unfortunately, fuzzy extractors need static helper information to account for noise. This helper information creates fundamental limitations on the resulting cryptosystems.

To overcome these limitations, we directly construct public-key encryption and digital signature algorithms with noisy keys. The core of our constructions is a computational version of the fuzzy vault (Juels and Sudan, Designs, Codes, and Cryptography 2006). Security of our schemes is based on graded encoding schemes (Garg et al., Eurocrypt 2013, Garg et al., TCC 2016). Importantly, our public-key encryption algorithm is based on a weaker model of graded encoding. If functional encryption or indistinguishability obfuscation exist in this weaker model, they also exist in the standard model.
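For intuition about the underlying object, here is a toy version of the classic information-theoretic fuzzy vault of Juels and Sudan, which our computational construction strengthens. The field size, set sizes, and the assumption that the first K matching points are genuine are all simplifications.

    # Toy Juels-Sudan fuzzy vault over GF(97); illustrative parameters only.
    import random
    rng = random.SystemRandom()
    P, K = 97, 3                   # field size; polynomial has K coefficients

    def poly_eval(coeffs, x):
        y = 0
        for c in reversed(coeffs):
            y = (y * x + c) % P
        return y

    def lock(coeffs, features, chaff=20):
        vault = [(a, poly_eval(coeffs, a)) for a in features]   # genuine points
        used = set(features)
        while len(vault) < len(features) + chaff:
            x, y = rng.randrange(P), rng.randrange(P)
            if x not in used and y != poly_eval(coeffs, x):     # chaff lies off the curve
                used.add(x)
                vault.append((x, y))
        rng.shuffle(vault)
        return vault

    def unlock(vault, features2):
        pts = [(x, y) for (x, y) in vault if x in features2][:K]
        if len(pts) < K:
            return None
        secret = 0                 # Lagrange interpolation of p(0) = coeffs[0]
        for xi, yi in pts:
            li = 1
            for xj, _ in pts:
                if xj != xi:
                    li = li * (-xj) % P * pow((xi - xj) % P, -1, P) % P
            secret = (secret + yi * li) % P
        return secret              # a real scheme Reed-Solomon-decodes instead

    # Example: vault = lock([42, 7, 13], [5, 9, 20, 33, 50]); any new reading
    # sharing >= K of those x-coordinates recovers coeffs[0] = 42.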

In addition, we use the computational fuzzy vault to construct the first reusable fuzzy extractor (Boyen, CCS 2004) supporting a linear fraction of errors.

Joint work with Charles Herder, Marten van Dijk, and Srinivas Devadas

Pseudoentropic Isometries

I was excited to join the paper Pseudoentropic Isometries: A New Framework for Fuzzy Extractor Reusability by Quentin Alamélou, Paul-Edmond Berthier, Chloe Cachet, Stéphane Cauchie, Benjamin Fuller, Philippe Gaborit, and Sailesh Simhadri. This paper describes how to use a random oracle to build a reusable fuzzy extractor that corrects a linear fraction of errors. Presented at AsiaCCS 2018. The abstract is below.

Abstract:

Fuzzy extractors (Dodis et al., Eurocrypt 2004) turn a noisy secret into a stable, uniformly distributed key. Reusable fuzzy extractors remain secure when multiple keys are produced from a single noisy secret (Boyen, CCS 2004). Boyen proved that any information-theoretically secure reusable fuzzy extractor is subject to strong limitations. Simoens et al. (IEEE S&P, 2009) then showed deployed constructions suffer severe security breaks when reused. Canetti et al. (Eurocrypt 2016) proposed using computational security to sidestep this problem. They constructed a computationally secure reusable fuzzy extractor for the Hamming metric that corrects a sublinear fraction of errors.

We introduce a generic approach to constructing reusable fuzzy extractors. We define a new primitive called a reusable pseudoentropic isometry that projects an input metric space to an output metric space. This projection preserves distance and entropy even if the same input is mapped to multiple output metric spaces. A reusable pseudoentropic isometry yields a reusable fuzzy extractor by 1) randomizing the noisy secret using the isometry and 2) applying a traditional fuzzy extractor to derive a secret key.
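Here is a toy Python sketch of the framework's shape for the Hamming metric. The fresh permutation-plus-pad map is an illustration of the distance-preserving step only; it is not, by itself, a secure reusable pseudoentropic isometry.

    # Step 1: a Hamming isometry sampled fresh per enrollment (illustrative).
    import random
    rng = random.SystemRandom()

    def isometry(bits):
        perm = list(range(len(bits))); rng.shuffle(perm)
        pad = [rng.randrange(2) for _ in bits]
        out = [bits[perm[i]] ^ pad[i] for i in range(len(bits))]
        return out, (perm, pad)                  # (perm, pad) joins the helper

    def apply_helper(bits, helper):
        perm, pad = helper
        return [bits[perm[i]] ^ pad[i] for i in range(len(bits))]

    # Permutation and XOR both preserve Hamming distance, so a nearby reading
    # stays nearby after randomization:
    w = [rng.randrange(2) for _ in range(16)]
    w2 = list(w); w2[3] ^= 1; w2[9] ^= 1         # distance-2 reading
    y, helper = isometry(w)
    y2 = apply_helper(w2, helper)
    assert sum(a != b for a, b in zip(y, y2)) == 2

    # Step 2: feed y into any traditional fuzzy extractor; later readings are
    # mapped through the stored helper and land within the same distance of y.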

We propose reusable pseudoentropic isometries for the set difference and Hamming metrics. The set difference construction is built from composable digital lockers (Canetti and Dakdouk, Eurocrypt 2008), yielding the first reusable fuzzy extractor that corrects a linear fraction of errors. For the Hamming metric, we show that the second construction of Canetti et al. (Eurocrypt 2016) can be seen as an instantiation of our framework. In both cases, the pseudoentropic isometry's reusability requires noisy secret distributions to have entropy in each symbol of the alphabet.

Lastly, we implement our set difference solution and describe two use cases.

Presentation at Asiacrypt 2016

I just presented our paper “When are Fuzzy Extractors Possible?” with Leonid Reyzin and Adam Smith at Asiacrypt 2016.  The talk video is available here: https://youtu.be/eiKqok3pNIs?t=13906 and the slides are here: fuzzy-extractors-when-possible-asiacrypt

When are Fuzzy Extractors Possible?

Benjamin Fuller, Leonid Reyzin, and Adam Smith. When are Fuzzy Extractors Possible? Asiacrypt 2016.

Abstract

Fuzzy extractors (Dodis et al., Eurocrypt 2004) convert repeated noisy readings of a high-entropy secret into the same uniformly distributed key. A minimum condition for the security of the key is the hardness of guessing a value that is similar to the secret, because the fuzzy extractor converts such a guess to the key.

We define fuzzy min-entropy to quantify this property of a noisy source of secrets. Fuzzy min-entropy measures the success of the adversary when provided with only the functionality of the fuzzy extractor, that is, the ideal security possible from a noisy distribution. High fuzzy min-entropy is necessary for the existence of a fuzzy extractor.
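Concretely, for a noisy distribution W and error tolerance t, fuzzy min-entropy is defined as

    H^{\mathrm{fuzz}}_{t,\infty}(W) = -\log\Big(\max_{w^*}\ \Pr\big[W \in B_t(w^*)\big]\Big),

where B_t(w^*) is the ball of radius t around w^*: the best an adversary limited to the fuzzy extractor's functionality can do is submit the single point whose neighborhood carries the most probability mass.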

We ask: is high fuzzy min-entropy a sufficient condition for key extraction from noisy sources? If only computational security is required, recent progress on program obfuscation gives evidence that fuzzy min-entropy is indeed sufficient. In contrast, information-theoretic fuzzy extractors are not known for many practically relevant sources of high fuzzy min-entropy.

In this paper, we show that fuzzy min-entropy is also sufficient for information-theoretically secure fuzzy extraction. For every source distribution W for which security is possible, we give a secure fuzzy extractor.

Our construction relies on the fuzzy extractor knowing the precise distribution of the source W. A more ambitious goal is to design a single extractor that works for all possible sources. We show that this more ambitious goal is impossible: we give a family of sources with high fuzzy min-entropy for which no single fuzzy extractor is secure. This result emphasizes the importance of accurate models of high entropy sources.

Reusable Fuzzy Extractors for Low-Entropy Distributions

Ran Canetti, Benjamin Fuller, Omer Paneth, Leonid Reyzin, and Adam Smith.  Reusable Fuzzy Extractors for Low-Entropy Distributions. Eurocrypt 2016.

Previous titles were “Reusable Fuzzy Extractors via Digital Lockers” and “Key Derivation From Noisy Sources With More Errors Than Entropy.”

Abstract

Fuzzy extractors (Dodis et al., Eurocrypt 2004) convert repeated noisy readings of a secret into the same uniformly distributed key. To eliminate noise, they require an initial enrollment phase that takes the first noisy reading of the secret and produces a nonsecret helper string to be used in subsequent readings. Reusable fuzzy extractors (Boyen, CCS 2004) remain secure even when this initial enrollment phase is repeated multiple times with noisy versions of the same secret, producing multiple helper strings (for example, when a single person’s biometric is enrolled with multiple unrelated organizations).

We construct the first reusable fuzzy extractor that makes no assumptions about how multiple readings of the source are correlated (the only prior construction assumed a very specific, unrealistic class of correlations). The extractor works for binary strings with Hamming noise; it achieves computational security under assumptions on the security of hash functions or in the random oracle model. It is simple and efficient and tolerates near-linear error rates.

Our reusable extractor is secure for source distributions of linear min-entropy rate. The construction is also secure for sources with much lower entropy rates (lower than those supported by prior, nonreusable constructions), assuming that the distribution has some additional structure, namely, that random subsequences of the source have sufficient min-entropy. We show that such structural assumptions are necessary to support low entropy rates.
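A rough back-of-the-envelope for why the subsequence assumption suffices (a simplification of the sample-then-lock analysis, in the random oracle model): if the construction publishes \ell lockers, each locker's sampled subsequence retains m bits of min-entropy, and the adversary makes q hash queries, then a union bound gives

    \Pr[\text{adversary recovers the key}] \le q \cdot \ell \cdot 2^{-m},

so security degrades only by about \log_2 \ell bits relative to guessing a single subsample.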

We then explore further how different structural properties of a noisy source can be used to construct fuzzy extractors when the error rates are high, providing a computationally secure and an information-theoretically secure construction for large-alphabet sources.