Papers

The best paper title I’ve ever had: ThirdEye

As I get deeper into biometrics, I have to come up with cool paper titles.  This is a cool piece of work on iris recognition that uses the triplet loss to train the underlying neural network.  We managed to come up with the name ThirdEye, which is just about perfect.  Iris recognition usually has four steps:

  1. Segmentation: split pixels into iris/non-iris,
  2. Normalization: map the variable-size iris region to a fixed-dimension representation,
  3. Feature extraction: remove noise and further reduce dimension, and
  4. Comparison: compare the resulting template with a previous version.

Using the triplet loss with modern neural networks, we challenge the belief that normalization is helpful (at least in all situations).  Source code is here: https://github.com/sohaib50k/ThirdEye—Iris-recognition-using-triplets
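
For readers who haven’t seen the triplet loss before, here is a minimal sketch (not the ThirdEye training code) of how a triplet loss trains an embedding network in PyTorch. The `IrisEncoder` architecture and the random tensors standing in for iris images are made up for illustration.

```python
# Minimal sketch (not the ThirdEye code): training an iris embedding with a
# triplet loss in PyTorch.  IrisEncoder and the data here are placeholders.
import torch
import torch.nn as nn

class IrisEncoder(nn.Module):
    """Toy CNN that maps an iris image to a fixed-length embedding."""
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, dim)

    def forward(self, x):
        z = self.features(x).flatten(1)
        z = self.fc(z)
        return nn.functional.normalize(z, dim=1)  # unit-norm embeddings

encoder = IrisEncoder()
criterion = nn.TripletMarginLoss(margin=0.2)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)

# anchor/positive: two images of the same iris; negative: a different iris.
anchor   = torch.randn(8, 1, 224, 224)
positive = torch.randn(8, 1, 224, 224)
negative = torch.randn(8, 1, 224, 224)

loss = criterion(encoder(anchor), encoder(positive), encoder(negative))
optimizer.zero_grad()
loss.backward()
optimizer.step()
# At verification time, two images match if their embeddings are close
# (e.g., Euclidean distance below a threshold).
```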


We were inspired by a similar paper called “Influence of Segmentation on deep iris recognition performance” by Lozej, Stepec, Struc, and Peer that asked if segmentation was useful.

8 Submissions to publication!!! Code Offset in the Exponent

This paper is finally published.  It’s my new record for the number of submissions needed.  The worst part is that it was Luke’s first paper, and the long road to publication unnecessarily stunted his growth.


I think it’s very cool, but I say that about everything I end up writing up!

Abstract: Fuzzy extractors transform a noisy source e into a stable key which can be reproduced from a nearby value e’. They are a fundamental tool for key derivation from biometric sources. This work introduces code offset in the exponent and uses this construction to build the first reusable fuzzy extractor that simultaneously supports structured, low entropy distributions with correlated symbols and confidence information. These properties are specifically motivated by the most pertinent applications—key derivation from biometrics and physical unclonable functions—which typically demonstrate low entropy with additional statistical correlations and benefit from extractors that can leverage confidence information for efficiency. Code offset in the exponent is a group encoding of the code offset construction (Juels and Wattenberg, CCS 1999) that stores the value e in a one-time pad which is sampled as a codeword, Ax, of a linear error-correcting code: Ax + e. Rather than encoding Ax + e directly, code offset in the exponent calls for encoding by exponentiation of a generator in a cryptographically strong group. We demonstrate security of the construction in the generic group model, establishing security whenever the inner product between the error distribution and all vectors in the null space of the code is unpredictable. We show this condition includes distributions supported by multiple prior fuzzy extractors. Our analysis also shows a prior construction of pattern matching obfuscation (Bishop et al., Crypto 2018) is secure for more distributions than previously known.
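
To make the encoding concrete, here is a toy numerical sketch of the code-offset-in-the-exponent idea from the abstract. The group and code parameters are illustrative choices of mine (nowhere near cryptographic size); the sketch also checks the observation that a dual codeword collapses the published value to g^{<e, v>}.

```python
# Toy sketch of "code offset in the exponent" (illustrative parameters only).
import secrets

p, q, g = 1019, 509, 4        # p = 2q + 1 (safe prime); g generates the order-q subgroup

def rand_zq():
    return secrets.randbelow(q)

# Generator matrix A of a toy [3,2] linear code over Z_q: the third row is the
# sum of the first two, so v = (1, 1, -1) lies in the dual code.
k = 2
row1 = [rand_zq() for _ in range(k)]
row2 = [rand_zq() for _ in range(k)]
row3 = [(row1[j] + row2[j]) % q for j in range(k)]
A = [row1, row2, row3]
v = [1, 1, -1]

x = [rand_zq() for _ in range(k)]              # random message
e = [secrets.randbelow(5) for _ in range(3)]   # small "biometric" error vector

Ax = [sum(A[i][j] * x[j] for j in range(k)) % q for i in range(3)]
# Published enrollment value: Ax + e, componentwise in the exponent of g.
C = [pow(g, (Ax[i] + e[i]) % q, p) for i in range(3)]

# Security intuition from the abstract: for any dual codeword v,
#   prod_i C[i]^{v[i]} = g^{<Ax + e, v>} = g^{<e, v>}   since <Ax, v> = 0,
# so the inner product <e, v> must be unpredictable.
lhs = 1
for Ci, vi in zip(C, v):
    lhs = (lhs * pow(Ci, vi % q, p)) % p
rhs = pow(g, sum(ei * vi for ei, vi in zip(e, v)) % q, p)
assert lhs == rhs
```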

Possibility of continuous source fuzzy extractors

Lowen and I just received word that our recent work was accepted to ISIT.  This paper asks whether you can build a universal fuzzy extractor for all distributions with high fuzzy min-entropy.  That is, can we have one construction that always just works?  Unfortunately, the answer is negative.  It is possible to artificially construct families of distributions that are impossible to simultaneously secure.  This paper shares a lot of techniques with my prior work with Reyzin and Smith.  Excited to talk about these techniques more with the information theory community!

DOCSDN: Dynamic and Optimal Configuration of Software-Defined Networks

New work on finding good network configurations with Tim, Devon, and Laurent.  This will appear this year at ACISP.

Abstract: Networks are designed with functionality, security, performance, and cost in mind. Tools exist to check or optimize individual properties of a network. These properties may conflict, so it is not always possible to run these tools in series to find a configuration that meets all requirements. This leads to network administrators manually searching for a configuration.

This need not be the case. In this paper, we introduce a layered framework for optimizing network configuration for functional and security requirements. Our framework is able to output configurations that meet reachability, bandwidth, and risk requirements. Each layer of our framework optimizes over a single property. A lower layer can constrain the search problem of a higher layer allowing the framework to converge on a joint solution.

Our approach has the most promise for software-defined networks, which can easily reconfigure their logical configuration. Our approach is validated with experiments over the fat tree topology, which is commonly used in data center networks. Search terminates within 1 to 5 minutes in our experiments. Thus, our solution can propose new configurations for short-term events such as defending against a focused network attack.
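
The following is a minimal toy sketch of the layered search described in the abstract, not the DOCSDN implementation. The paths, bandwidth levels, and per-layer checks are invented purely for illustration; the point is that each layer handles one property and pushes conflicts back up as constraints.

```python
# Toy sketch of a layered configuration search (not the DOCSDN code).
# A "config" here is just a bandwidth assignment per path.
from itertools import product

PATHS = ["A-B", "A-C"]
LEVELS = [1, 5, 10]                      # toy bandwidth choices per path (Gbps)

def reachability_layer(constraints):
    """Top layer: return the first assignment satisfying every accumulated constraint."""
    for combo in product(LEVELS, repeat=len(PATHS)):
        config = dict(zip(PATHS, combo))
        if all(c(config) for c in constraints):
            return config
    return None

def bandwidth_layer(config):
    """Toy property: total provisioned bandwidth must be at least 11 Gbps."""
    if sum(config.values()) >= 11:
        return True, []
    return False, [lambda c, bad=dict(config): c != bad]   # forbid this config

def risk_layer(config):
    """Toy property: no single path may carry more than 10 Gbps."""
    if max(config.values()) <= 10:
        return True, []
    return False, [lambda c, bad=dict(config): c != bad]

def layered_search(max_rounds=50):
    constraints = []
    for _ in range(max_rounds):
        config = reachability_layer(constraints)
        if config is None:
            return None                                     # requirements conflict
        for layer in (bandwidth_layer, risk_layer):
            ok, extra = layer(config)
            if not ok:
                constraints.extend(extra)                   # constrain the layer above
                break
        else:
            return config                                   # joint solution found
    return None

print(layered_search())   # e.g. {'A-B': 1, 'A-C': 10}
```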

Iris Segmentation using CNNs

Sohaib (who’s awesome!) just gave his first presentation on performing iris segmentation using fully convolutional neural nets. The paper was published at AMV 2018, a workshop at ACCV.

Abstract: The extraction of consistent and identifiable features from an image of the human iris is known as iris recognition. Identifying which pixels belong to the iris, known as segmentation, is the first stage of iris recognition. Errors in segmentation propagate to later stages. Current segmentation approaches are tuned to specific environments. We propose using a convolutional neural network for iris segmentation. Our algorithm is accurate when trained in a single environment and tested in multiple environments. Our network builds on the Mask R-CNN framework (He et al., ICCV 2017). Our approach segments faster than previous approaches, including the Mask R-CNN network. Our network is accurate when trained on a single environment and tested with a different sensor (either visible light or near-infrared). Its accuracy degrades when trained with a visible light sensor and tested with a near-infrared sensor (and vice versa). A small amount of retraining of the visible light model (using a few samples from a near-infrared dataset) yields a tuned network accurate in both settings. For training and testing, this work uses the Casia v4 Interval, Notre Dame 0405, Ubiris v2, and IITD datasets.
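
As a rough illustration of the general approach (not the paper’s trained network), here is how one might run an off-the-shelf Mask R-CNN from torchvision as a starting point for iris segmentation; in practice the model would be fine-tuned on the iris datasets listed in the abstract.

```python
# Sketch of iris segmentation with an off-the-shelf Mask R-CNN from torchvision;
# illustrative only, not the network from the paper.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Two classes: background + iris.  The model would be fine-tuned on labeled
# iris data; here we only show the inference path with random weights.
model = maskrcnn_resnet50_fpn(num_classes=2)
model.eval()

image = torch.rand(3, 480, 640)          # stand-in for an eye image
with torch.no_grad():
    output = model([image])[0]           # dict with boxes, labels, scores, masks

if len(output["masks"]) > 0:
    # Soft mask of the highest-scoring detection; threshold to get iris pixels.
    iris_mask = output["masks"][0, 0] > 0.5
    print("iris pixels:", int(iris_mask.sum()))
else:
    print("no iris detected")
```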

Same Point Composable and Nonmalleable Obfuscated Point Functions

This is a paper I’m very excited about with Peter Fenteany, a great undergrad at UConn.

Abstract: A point obfuscator is an obfuscated program that indicates if a user enters a previously stored password. A digital locker is stronger: outputting a key if a user enters a previously stored password. The real-or-random transform allows one to build a digital locker from a composable point obfuscator (Canetti and Dakdouk, Eurocrypt 2008). Ideally, both objects would be nonmalleable, detecting adversarial tampering. Appending a non-interactive zero knowledge proof of knowledge adds nonmalleability in the common random string (CRS) model. Komargodski and Yogev (Eurocrypt, 2018) built a nonmalleable point obfuscator without a CRS. We show a lemma in their proof is false, leaving security of their construction unclear. Bartusek, Ma, and Zhandry (Crypto, 2019) used similar techniques and introduced another nonmalleable point function; their obfuscator is not secure if the same point is obfuscated twice. Thus, there was no composable and nonmalleable point function to instantiate the real-or-random construction. Our primary contribution is a nonmalleable point obfuscator that can be composed any polynomial number of times with the same point (which must be known ahead of time). Security relies on the assumption used in Bartusek, Ma, and Zhandry. This construction enables a digital locker that is nonmalleable with respect to the input password. As a secondary contribution, we introduce a key encoding step to detect tampering on the key. This step combines nonmalleable codes and seed-dependent condensers. The seed for the condenser must be public and not tampered, so this can be achieved in the CRS model. The password distribution may depend on the condenser’s seed as long as it is efficiently sampleable. This construction is black box in the underlying point obfuscation. Nonmalleability for the password is ensured for functions that can be represented as low degree polynomials. Key nonmalleability is inherited from the class of functions prevented by the nonmalleable code.
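
To make the objects concrete, here is a toy hash-based digital locker in the spirit of the standard random-oracle-style construction. This is emphatically not the paper’s algebraic construction and provides no nonmalleability or composability guarantees; it only shows the lock/unlock interface that a digital locker exposes.

```python
# Toy hash-based digital locker (illustrative only; NOT the paper's scheme).
import hashlib
import secrets

def lock(password: bytes, key: bytes):
    """Store a program that outputs `key` only on input `password`.
    Toy limitation: key must be at most 16 bytes, since the pad is one SHA-256 output."""
    nonce = secrets.token_bytes(16)
    pad = hashlib.sha256(nonce + password).digest()[: len(key) + 16]
    # Append a run of zero bytes to the key so unlocking can detect wrong guesses.
    ct = bytes(a ^ b for a, b in zip(key + b"\x00" * 16, pad))
    return nonce, ct

def unlock(locker, guess: bytes):
    nonce, ct = locker
    pad = hashlib.sha256(nonce + guess).digest()[: len(ct)]
    pt = bytes(a ^ b for a, b in zip(ct, pad))
    return pt[:-16] if pt[-16:] == b"\x00" * 16 else None   # None = wrong password

locker = lock(b"hunter2", b"16-byte-session!")
assert unlock(locker, b"hunter2") == b"16-byte-session!"
assert unlock(locker, b"wrong") is None
```

A point obfuscator is the weaker object: it only reveals whether the guess matched, rather than releasing a key.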

FPGA Implementation of a Cryptographically-Secure PUF Based on Learning Parity with Noise

Published in MDPI Cryptography

Joint Work with Chenglu, Charles, Ling, Ha, Srini, and Marten.

Abstract: Herder et al. (IEEE Transactions on Dependable and Secure Computing, 2017) designed a new computational fuzzy extractor and physical unclonable function (PUF) challenge-response protocol based on the Learning Parity with Noise (LPN) problem. The protocol requires no irreversible state updates on the PUFs for security, like burning irreversible fuses, and can correct for significant measurement noise when compared to PUFs using a conventional (information-theoretically secure) fuzzy extractor. However, Herder et al. did not implement their protocol. In this paper, we give the first implementation of a challenge response protocol based on computational fuzzy extractors. Our main insight is that “confidence information” does not need to be kept private if the noise vector is independent of the confidence information, e.g., the bits generated by ring oscillator pairs which are physically placed close to each other. This leads to a construction which is a simplified version of the design of Herder et al. (also building on a ring oscillator PUF). Our simplifications allow for a dramatic reduction in area by making a mild security assumption on ring oscillator physical obfuscated key output bits.
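
Here is a small simulated sketch of the confidence-information idea from the abstract (not the FPGA implementation): each ring-oscillator pair yields a bit from the sign of its count difference and a confidence value from the magnitude. The count model below is invented for illustration; on hardware the counts come from counters attached to the oscillators.

```python
# Simulated sketch of ring-oscillator PUF bits with confidence information.
import random

def measure_ro_pair(bias, noise_sigma=5.0):
    """Simulated count difference of one RO pair (bias = fixed manufacturing offset)."""
    return bias + random.gauss(0, noise_sigma)

def read_puf(biases):
    bits, confidence = [], []
    for b in biases:
        diff = measure_ro_pair(b)
        bits.append(1 if diff > 0 else 0)
        confidence.append(abs(diff))       # large |diff| => bit unlikely to flip later
    return bits, confidence

random.seed(1)
biases = [random.gauss(0, 20) for _ in range(16)]   # per-device offsets, fixed at fabrication
bits, conf = read_puf(biases)
# A decoder can trust high-confidence bits first; the abstract's point is that
# publishing `conf` need not hurt security when the noise is independent of it.
print(bits)
print([round(c, 1) for c in conf])
```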

Reusable Authentication from the Iris

I’m super excited to put out my first paper written solely with UConn students.  James and Sailesh have put a ton of work into this.  We build a full key derivation system from the human iris by integrating image processing and the crypto described in our previous paper.  I’m particularly excited because I started working on this problem in graduate school and it felt like we’d never get to an actual implementation.

Abstract: Mobile platforms use biometrics for authentication. Unfortunately, biometrics exhibit noise between repeated readings. Due to the noise, biometrics are stored in plaintext, so device compromise completely reveals the user’s biometric value.

To limit privacy violations, one can use fuzzy extractors to derive a stable cryptographic key from biometrics (Dodis et al., Eurocrypt 2004). Unfortunately, fuzzy extractors have not seen wide deployment due to insufficient security guarantees. Current fuzzy extractors provide no security for real biometric sources and no security if a user enrolls the same biometric with multiple devices or providers.

Previous work claims key derivation systems from the iris but only under weak adversary models. In particular, no known construction securely handles the case of multiple enrollments. Canetti et al. (Eurocrypt 2016) proposed a new fuzzy extractor called sample-then-lock.

We construct biometric key derivation for the iris starting from sample-then-lock (sketched below, after the feature list). Achieving satisfactory parameters requires modifying and coupling the image processing and the cryptography. Our construction is implemented in Python and is being open-sourced. Our system has the following novel features:

— 45 bits of security. This bound is pessimistic, assuming the adversary can sample strings distributed according to the iris in constant time. Such an algorithm is not known.

— Secure enrollment with multiple services.

— Natural incorporation of a password, enabling multifactor authentication. The structure of the construction allows the overall security to be the sum of the security of each factor (increasing security to 79 bits).
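
Here is the promised rough sketch of sample-then-lock, the Canetti et al. construction we start from: repeatedly sample small subsets of iris-code bits and use each subset to lock the same key in a digital locker; a later reading unlocks if any sampled subset happens to be error-free. The subset size, locker count, and hash-based lockers below are illustrative, not our system’s actual parameters.

```python
# Rough sketch of sample-then-lock (Canetti et al., Eurocrypt 2016);
# parameters here are illustrative, not the system's actual choices.
import hashlib
import secrets

def enroll(iris_code: list, key: bytes, num_lockers=500, subset_size=12):
    lockers = []
    for _ in range(num_lockers):
        idx = [secrets.randbelow(len(iris_code)) for _ in range(subset_size)]
        sample = bytes(iris_code[i] for i in idx)
        nonce = secrets.token_bytes(16)
        pad = hashlib.sha256(nonce + sample).digest()
        ct = bytes(a ^ b for a, b in zip(key + b"\x00" * 16, pad))  # 16B key + 16B check
        lockers.append((idx, nonce, ct))
    return lockers

def reproduce(iris_code: list, lockers):
    for idx, nonce, ct in lockers:
        sample = bytes(iris_code[i] for i in idx)
        pad = hashlib.sha256(nonce + sample).digest()
        pt = bytes(a ^ b for a, b in zip(ct, pad))
        if pt[16:] == b"\x00" * 16:          # this subset matched the enrollment reading
            return pt[:16]
    return None                               # every sampled subset hit an error

enroll_code = [secrets.randbelow(2) for _ in range(1024)]
key = secrets.token_bytes(16)
lockers = enroll(enroll_code, key)

# A later reading with ~2% of bits flipped still unlocks with high probability.
reading = list(enroll_code)
for i in range(0, 1024, 50):
    reading[i] ^= 1
assert reproduce(reading, lockers) == key
```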

Environmentally Keyed Malware

Our paper (with Jeremy Blackthorne, Ben Kaiser, and Bulent Yener) on how malware authenticates its environment was published at Latincrypt 2017.  Abstract below:

Abstract: Malware needs to execute on a target machine while simultaneously keeping its payload confidential from a malware analyst. Standard encryption can be used to ensure the confidentiality, but it does not address the problem of hiding the key. Any analyst can find the decryption key if it is stored in the malware or derived in plain view.

One approach is to derive the key from a part of the environment which changes when the analyst is present. Such malware derives a key from the environment and encrypts its true functionality under this key.

In this paper, we present a formal framework for environmental authentication. We formalize the interaction between malware and analyst in three settings: 1) blind: in which the analyst does not have access to the target environment, 2) basic: where the analyst can load a single analysis toolkit on an infected target, and 3) resettable: where the analyst can create multiple copies of an infected environment. We show necessary and sufficient conditions for malware security in the blind and basic games and show that even under mild conditions, the analyst can always win in the resettable scenario.
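
As a toy illustration of the environmental keying described above (nothing here is from the paper): the key is a hash of an environment value that an analyst’s presence would change, and the payload only decrypts when that value matches. The environment string and padding scheme are made up for the example.

```python
# Toy illustration of environmental key derivation (illustrative only).
import hashlib

def derive_key(env: bytes) -> bytes:
    """Key is a hash of an environment value the analyst's presence would change."""
    return hashlib.sha256(b"env-derived-key" + env).digest()

def seal(payload: bytes, env: bytes) -> bytes:
    assert len(payload) <= 32                 # toy: one hash output as the pad
    pad = derive_key(env)
    return bytes(a ^ b for a, b in zip(payload.ljust(32, b"\x00"), pad))

def open_in(env: bytes, sealed: bytes) -> bytes:
    pad = derive_key(env)
    return bytes(a ^ b for a, b in zip(sealed, pad)).rstrip(b"\x00")

target_env  = b"expected-config-string"       # value present only on the target
analyst_env = b"expected-config-string+toolkit"

sealed = seal(b"true functionality", target_env)
print(open_in(target_env, sealed))            # b'true functionality'
print(open_in(analyst_env, sealed))           # garbage: wrong environment, wrong key
```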

Catching MPC Cheaters

Our paper (with Rob Cunningham and Sophia Yakoubov) on augmenting security of MPC will be published at ICITS 2017.  Abstract below:

Abstract: Secure multi-party computation (MPC) protocols do not completely prevent malicious parties from cheating and disrupting the computation. A coalition of malicious parties can repeatedly cause the computation to abort or provide an input that does not correspond to reality. We augment MPC with three new properties to discourage cheating. First is a strengthening of identifiable abort, called completely identifiable abort, where all parties who do not follow the protocol will be identified as cheaters by each honest party, even if there are more cheaters than honest parties. The second is completely identifiable auditability, which means that a third party can, given the computation output and its additive share decomposition, determine whether the computation was performed correctly (and who cheated if it was not) by looking at a transcript of the computation.  The third is openability, which means that a distinguished coalition of parties can recover the MPC inputs in the extreme setting when the parties are discovered to have lied about their inputs (e.g. by a real-world event contradicting the output of the MPC).

We construct the first (efficient) MPC protocol achieving these properties.  Our scheme is built on the SPDZ protocol (Damgard et al., Crypto 2012), which leverages an offline (computation-independent) pre-processing phase to speed up the online computation. Our protocol is optimistic: it has the same communication and computation complexity in the online phase as SPDZ when no parties cheat. If cheating does occur, each honest party performs only local computation to identify cheaters.

Our main technical tool is a new locally identifiable secret sharing scheme (as defined by Ishai, Ostrovsky, and Zikas (TCC 2012)) which we call commitment enhanced secret sharing, or CESS. Each CESS share contains an additive share (as in SPDZ), and additionally includes linearly homomorphic commitments to every additive share. Each CESS share also contains the decommitment value for the commitment to the corresponding additive share. These commitments enable local identification during reconstruction; by the binding property of the commitment scheme, no party should be able to convince any other party of the validity of an altered additive share.  CESS enables MPC with completely identifiable abort; all parties whose claimed output shares do not match their output share commitments are identified as cheaters.

The work of Baum, Damgard, and Orlandi (SCN 2014) introduces the concept of auditability, which allows a third party to verify that the computation was executed correctly, but not to identify the cheaters if it was not.  We enable the third party to identify the cheaters by augmenting the scheme of Baum, Damgard, and Orlandi with CESS. We add openability through the use of verifiable encryption and specialized zero-knowledge proofs.
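
To make the CESS idea concrete, here is a toy sketch of additive shares paired with Pedersen-style commitments so that reconstruction locally identifies an altered share. The group parameters are illustrative (far too small, and the discrete log between the generators is not actually hidden), and the sketch omits all of the SPDZ/MAC machinery of the real protocol.

```python
# Toy sketch of commitment enhanced secret sharing (CESS): additive shares plus
# commitments to every share, so an altered share is caught at reconstruction.
import secrets

p, q = 1019, 509                 # p = 2q + 1; commitments live in the order-q subgroup
g, h = 4, 9                      # toy generators (in practice log_g(h) must be unknown)

def commit(value, rand):
    return (pow(g, value % q, p) * pow(h, rand % q, p)) % p

def share(secret, n):
    """Additive shares of `secret` plus commitments (and openings) to each share."""
    shares = [secrets.randbelow(q) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % q)
    rands = [secrets.randbelow(q) for _ in range(n)]
    commits = [commit(s, r) for s, r in zip(shares, rands)]
    # Party i holds (shares[i], rands[i]) privately plus the full commitment vector.
    return shares, rands, commits

def reconstruct(claimed, rands, commits):
    cheaters = [i for i, (s, r, c) in enumerate(zip(claimed, rands, commits))
                if commit(s, r) != c]
    if cheaters:
        return None, cheaters                # identifiable abort (toy version)
    return sum(claimed) % q, []

shares, rands, commits = share(123, n=3)
print(reconstruct(shares, rands, commits))       # (123, [])

bad = list(shares)
bad[1] = (bad[1] + 7) % q                        # party 1 lies about its share
print(reconstruct(bad, rands, commits))          # (None, [1])
```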