
8 Submissions to publication!!! Code Offset in the Exponent

This paper is finally published. It's my new record for the number of submissions needed. The worst part is that it was Luke's first paper, and the long road to publication unnecessarily stunted his growth.

 

I think it's very cool, but I say that about everything I end up writing up!

Abstract: Fuzzy extractors transform a noisy source e into a stable key which can be reproduced from a nearby value e'. They are a fundamental tool for key derivation from biometric sources. This work introduces code offset in the exponent and uses this construction to build the first reusable fuzzy extractor that simultaneously supports structured, low-entropy distributions with correlated symbols and confidence information. These properties are specifically motivated by the most pertinent applications (key derivation from biometrics and physical unclonable functions), which typically demonstrate low entropy with additional statistical correlations and benefit from extractors that can leverage confidence information for efficiency. Code offset in the exponent is a group encoding of the code offset construction (Juels and Wattenberg, CCS 1999) that stores the value e in a one-time pad sampled as a codeword Ax of a linear error-correcting code: Ax + e. Rather than encoding Ax + e directly, code offset in the exponent encodes it by exponentiation of a generator in a cryptographically strong group. We demonstrate security of the construction in the generic group model, showing that security holds whenever the inner product between the error distribution and every vector in the null space of the code is unpredictable. We show this condition covers distributions supported by multiple prior fuzzy extractors. Our analysis also shows that a prior construction of pattern matching obfuscation (Bishop et al., Crypto 2018) is secure for more distributions than previously known.
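To make the encoding step concrete, here is a minimal Python sketch of the code-offset-in-the-exponent encoding. It is not the paper's implementation: the group (the order-11 subgroup of Z_23*) and the code dimensions are toy values chosen only so the arithmetic is easy to see, and the function names are my own.

```python
# Toy sketch of code offset in the exponent (not the paper's code).
# Group: order-q subgroup of Z_p^* generated by g, with p = 23, q = 11, g = 2.
# A real instantiation would use a cryptographically strong group.
import secrets

p, q, g = 23, 11, 2   # g has multiplicative order q modulo p

def sample_generator_matrix(n, k):
    """Random n x k generator matrix A of a linear code over Z_q."""
    return [[secrets.randbelow(q) for _ in range(k)] for _ in range(n)]

def encode_in_exponent(A, e):
    """Publish r_i = g^{(Ax + e)_i} mod p for a fresh random message x."""
    n, k = len(A), len(A[0])
    x = [secrets.randbelow(q) for _ in range(k)]
    r = []
    for i in range(n):
        exponent = (sum(A[i][j] * x[j] for j in range(k)) + e[i]) % q
        r.append(pow(g, exponent, p))
    return r

# Enrollment with a noisy reading e (symbols in Z_q).
n, k = 6, 2
A = sample_generator_matrix(n, k)
e = [secrets.randbelow(q) for _ in range(n)]
print(encode_in_exponent(A, e))

# Per the paper's generic-group analysis, security requires that the inner
# product of e with every vector in the null space of the code generated by A
# be unpredictable.
```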

Iris Segmentation using CNNs

Sohaib (who's awesome!) just gave his first presentation, on performing iris segmentation using fully convolutional neural networks. The paper was published at AMV 2018, a workshop at ACCV.

Abstract: The extraction of consistent and identifiable features from an image of the human iris is known as iris recognition. Identifying which pixels belong to the iris, known as segmentation, is the first stage of iris recognition. Errors in segmentation propagate to later stages. Current segmentation approaches are tuned to specific environments. We propose using a convolutional neural network for iris segmentation. Our algorithm is accurate when trained in a single environment and tested in multiple environments. Our network builds on the Mask R-CNN framework (He et al., ICCV 2017). Our approach segments faster than previous approaches, including the Mask R-CNN network. Our network is accurate when trained on a single environment and tested with a different sensor (either visible light or near-infrared). Its accuracy degrades when trained with a visible light sensor and tested with a near-infrared sensor (and vice versa). A small amount of retraining of the visible light model (using a few samples from a near-infrared dataset) yields a tuned network accurate in both settings. For training and testing, this work uses the Casia v4 Interval, Notre Dame 0405, Ubiris v2, and IITD datasets.
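For readers who want to experiment, the sketch below shows one way to set up a Mask R-CNN model for two-class (background/iris) segmentation using torchvision. This is not Sohaib's network or training code; it is just the stock torchvision Mask R-CNN with its heads resized, which is an obvious starting point for the kind of fine-tuning and cross-sensor retraining described in the abstract.

```python
# Assumed setup, not the paper's code: stock torchvision Mask R-CNN with its
# box and mask heads replaced for two classes (background + iris).
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_iris_segmenter(num_classes: int = 2):
    # COCO-pretrained weights give a reasonable transfer-learning start.
    # (torchvision >= 0.13; older versions use pretrained=True instead.)
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

    # Replace the box classification head for our class count.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # Replace the mask prediction head for our class count.
    in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)
    return model

model = build_iris_segmenter()
# Training would follow the standard torchvision detection loop over an iris
# dataset; retraining a visible-light model on a few near-infrared samples,
# as in the paper's experiment, would reuse this same model object.
```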

Presentation at Asiacrypt 2016

I just presented our paper “When are Fuzzy Extractors Possible?” (joint work with Leonid Reyzin and Adam Smith) at Asiacrypt 2016. The talk video is available here: https://youtu.be/eiKqok3pNIs?t=13906 and the slides are here: fuzzy-extractors-when-possible-asiacrypt

When are Fuzzy Extractors Possible?

Benjamin Fuller, Leonid Reyzin, and Adam Smith. When are Fuzzy Extractors Possible? Asiacrypt 2016.

Abstract

Fuzzy extractors (Dodis et al., Eurocrypt 2004) convert repeated noisy readings of a high-entropy secret into the same uniformly distributed key. A minimum condition for the security of the key is the hardness of guessing a value that is similar to the secret, because the fuzzy extractor converts such a guess to the key.

We define fuzzy min-entropy to quantify this property of a noisy source of secrets. Fuzzy min-entropy measures the success of the adversary when provided with only the functionality of the fuzzy extractor, that is, the ideal security possible from a noisy distribution. High fuzzy min-entropy is necessary for the existence of a fuzzy extractor.
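Concretely, for a distance d and error tolerance t, fuzzy min-entropy is (roughly paraphrasing the paper's definition):

```latex
% Fuzzy min-entropy of a source W with error tolerance t: the adversary's best
% strategy is to guess the center w* of the heaviest radius-t ball, so high
% fuzzy min-entropy means every such ball carries little probability mass.
H^{\mathrm{fuzz}}_{t,\infty}(W) \;=\; -\log\Bigl(\max_{w^*}\ \Pr\bigl[\, d(W, w^*) \le t \,\bigr]\Bigr)
```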

We ask: is high fuzzy min-entropy a sufficient condition for key extraction from noisy sources? If only computational security is required, recent progress on program obfuscation gives evidence that fuzzy min-entropy is indeed sufficient. In contrast, information-theoretic fuzzy extractors are not known for many practically relevant sources of high fuzzy min-entropy.

In this paper, we show that fuzzy min-entropy is also sufficient for information-theoretically secure fuzzy extraction. For every source distribution W for which security is possible, we give a secure fuzzy extractor.

Our construction relies on the fuzzy extractor knowing the precise distribution of the source W. A more ambitious goal is to design a single extractor that works for all possible sources. We show that this more ambitious goal is impossible: we give a family of sources with high fuzzy min-entropy for which no single fuzzy extractor is secure. This result emphasizes the importance of accurate models of high-entropy sources.