Author: Ben Fuller

The best paper title I’ve ever had: ThirdEye

As I get deeper into biometrics, I have to think of cool paper titles.  This is a cool work on iris recognition that uses the triplet loss to train the underlying neural network.  We managed to come up with the name ThirdEye, which is just about perfect.  Iris recognition usually has four steps:

  1. Segmentation: split pixels into iris/non-iris,
  2. Normalization: map the variable-size iris region into a fixed-dimension representation,
  3. Feature extraction: remove noise and further reduce dimension, and
  4. Comparison: compare the resulting template with a previous version.

Using the triplet loss with modern neural networks, we challenge the belief that normalization is helpful (at least in all situations).  Source is here: Iris-recognition-using-triplets
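The triplet loss mentioned above can be sketched in a few lines. Here is a minimal NumPy version; the margin value, Euclidean distance, and the example embeddings are illustrative assumptions, not the paper's actual hyperparameters or features:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss: pull the positive embedding toward the
    anchor while pushing the negative at least `margin` farther away."""
    d_pos = np.linalg.norm(anchor - positive)   # anchor-positive distance
    d_neg = np.linalg.norm(anchor - negative)   # anchor-negative distance
    return max(0.0, d_pos - d_neg + margin)

# Embeddings of the same iris should end up close; different irises far apart.
a = np.array([1.0, 0.0])   # anchor
p = np.array([0.9, 0.1])   # same identity as the anchor
n = np.array([-1.0, 0.5])  # different identity
loss = triplet_loss(a, p, n)
```

Training a network against this loss directly on (lightly processed) iris images is what lets the paper question whether the normalization step is needed at all.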


We were inspired by a similar paper called “Influence of Segmentation on deep iris recognition performance” by Lozej, Stepec, Struc, and Peer that asked if segmentation was useful.

8 Submissions to publication!!! Code Offset in the Exponent

This paper is finally published.  It’s my new record for the number of submissions needed.  The worst part is that it was Luke’s first paper, and the long road unnecessarily stunted his growth.


I think it’s very cool, but I say that about everything I end up writing up!

Abstract: Fuzzy extractors transform a noisy source e into a stable key which can be reproduced from a nearby value e’. They are a fundamental tool for key derivation from biometric sources. This work introduces code offset in the exponent and uses this construction to build the first reusable fuzzy extractor that simultaneously supports structured, low entropy distributions with correlated symbols and confidence information. These properties are specifically motivated by the most pertinent applications, key derivation from biometrics and physical unclonable functions, which typically demonstrate low entropy with additional statistical correlations and benefit from extractors that can leverage confidence information for efficiency. Code offset in the exponent is a group encoding of the code offset construction (Juels and Wattenberg, CCS 1999) that stores the value e in a one-time pad which is sampled as a codeword, Ax, of a linear error-correcting code: Ax + e. Rather than encoding Ax + e directly, code offset in the exponent calls for encoding by exponentiation of a generator in a cryptographically strong group. We demonstrate security of the construction in the generic group model, establishing security whenever the inner product between the error distribution and all vectors in the null space of the code is unpredictable. We show this condition includes distributions supported by multiple prior fuzzy extractors. Our analysis also shows a prior construction of pattern matching obfuscation (Bishop et al., Crypto 2018) is secure for more distributions than previously known.
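To make the code-offset idea concrete, here is a toy sketch of the classic Juels-Wattenberg construction, with a 3x repetition code standing in for the linear code A; the paper's variant additionally lifts the pad into a group exponent (encoding g^(Ax+e) rather than Ax+e), which is omitted here. All parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear code: repeat each message bit 3 times (corrects 1 flip per block).
def encode(x):
    return np.repeat(x, 3)

def decode(c):
    return (c.reshape(-1, 3).sum(axis=1) >= 2).astype(int)  # majority vote

# Enrollment: hide the reading e in a one-time pad that is a codeword.
x = rng.integers(0, 2, size=4)     # random message (the key material)
e = rng.integers(0, 2, size=12)    # noisy source reading
helper = (encode(x) + e) % 2       # public helper value: Ax + e

# Reproduction: a nearby reading e' cancels the pad up to a small error,
# which the error-correcting code then removes.
e_prime = e.copy()
e_prime[5] ^= 1                    # one bit of noise between readings
x_rec = decode((helper + e_prime) % 2)
```

Since helper + e' = Ax + (e XOR e') over GF(2), decoding succeeds whenever the two readings differ by fewer errors than the code corrects.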

My pretenure path at UConn/tenure packet submitted!

It has been a long time since I’ve posted something here, roughly corresponding with the start of the COVID pandemic.  During that time I didn’t travel as much, and it was easy not to think about the value of communicating with the outside world.  In Fall 2019, I tore my quad (and broke my kneecap and toe), which put me in less of a communicative mood.  We also found out in Spring 2020 that my wife was pregnant; we welcomed our first child, Ayla Helen Fuller, on September 24, 2020.

We were trying to be first-time parents at home while preparing my tenure application.  Honestly, the COVID pandemic was really hard on research productivity.  It was particularly hard for theory students to make progress without in-person whiteboard sessions.  It seemed that everyone had much more to do while making much less progress.

I had a streak of about 10 straight rejections, including a record of 8 submissions for one paper before acceptance.  Coupled with several important grant rejections/cancellations, this left the lab in a rough place.  Thanks to some help from UConn we’ve managed to keep supporting students.  It has been a tough two years.

I’m writing because in academia (and all aspects of life) we see others’ successes, tend to notice people who are vastly more successful/capable than we are, and compare ourselves to them.  Everyone sees the professor/Ph.D. student who publishes multiple times every year at the top conference.  As they say, “Comparison is the thief of joy.”

I’m writing this post because I’ve submitted my tenure packet to UConn, and from here on out (assuming I don’t lose my job) any comparison that I do will be purely internal.  I’m writing in part to remind myself that all comparisons are my own doing and they never make me feel good.  I’m sure there are some people reading this who view me as a competent member of a community, some who don’t know who I am, and some who aspire to the level of success I have achieved.

All I know is that I love teaching, mentoring students, discovering new results, and communicating science to others.  I don’t know how good I am at any of these things.  I’d like to say that I don’t care, but I do, deeply.  Maybe that will change someday as external pressures reduce.  But for now I’m looking back through my CV and seeing that I’ve:

  • Co-authored papers with 56 people,
  • Supervised 8 graduate students and learned a ton from all of them,
  • Supervised 23 undergraduates in research in five years at UConn (including 8 honors theses), and
  • Built four new courses (Crypto II, Network Security, Computer Security, and our Cybersecurity Lab)

I’m going to try and measure my success in terms of positive impact made on those around me.  I am thrilled to work with wonderful people and I hope I can help them achieve their dreams.  I’m not the best in this field, I’m not the worst (either as a person or a researcher).  I believe I’m helping the world and those around me.


Opening the Altschuler Cybersecurity Lab

This semester I’ve been focusing most of my attention on preparing a new teaching laboratory at UConn.  The lab was dedicated today, and I figure this is as good a chance as any to talk about it.  The lab was initiated thanks to a wonderful endowment established by Sam and Steve Altschuler, who are both UConn alumni (’50 and ’54).  The endowment is designed to keep us up to date on equipment as cybersecurity threats change.  This lab has three primary differences from my traditional teaching:

  1. There’s no lecture; the entire contact time is students working.
  2. Students learn by working on projects, and they are free to search the web for anything, modeling how they will actually work in practice.
  3. The class is designed for general scientists and engineers, not cybersecurity experts.  The goal is to create enough exposure to the issues to make people stop and think when they make a new web app or backend database.  As a side benefit we will get more people interested in cybersecurity as a career.

The topics we are covering this semester are:

  1. Authentication and password cracking
  2. Wired network sniffing, protocol reverse engineering
  3. Wireless hijacking
  4. The USB interface, faking a keyboard and the powers of physical access.
  5. Memory safety and why it matters
  6. The wild west of IoT

The lab we’re currently working on uses a Raspberry Pi and a Pineapple Tetra as the primary components.  The Pineapple is used to create a rogue wireless network and to deauthenticate clients from a legitimate network.  Students then fake DNS responses and stand up a malicious website to collect credentials.  This is all achievable in a few hours with a couple hundred dollars of equipment.  I’ve been amazed by the engagement so far.  I keep being surprised by how much work it is to set up a good lab.  I know lecturing is easier; my hope is that this format brings a unique experience to students.



Possibility of continuous source fuzzy extractors

Lowen and I just received word of acceptance of our recent work to ISIT.  This paper asks whether you can build a universal fuzzy extractor for all distributions with high fuzzy min-entropy.  That is, can we have one construction that always just works?  Unfortunately, the answer is negative.  It is possible to artificially construct families of distributions that are impossible to simultaneously secure.  This paper shares a lot of techniques with prior work of myself, Reyzin, and Smith.  Excited to talk about these techniques more with the information theory community!

DOCSDN: Dynamic and Optimal Configuration of Software-Defined Networks

New work on finding good network configurations with Tim, Devon, and Laurent.  This will appear this year at ACISP.

Abstract—Networks are designed with functionality, security, performance, and cost in mind. Tools exist to check or optimize individual properties of a network. These properties may conflict, so it is not always possible to run these tools in series to find a configuration that meets all requirements. This leads to network administrators manually searching for a configuration.

This need not be the case. In this paper, we introduce a layered framework for optimizing network configuration for functional and security requirements. Our framework is able to output configurations that meet reachability, bandwidth, and risk requirements. Each layer of our framework optimizes over a single property. A lower layer can constrain the search problem of a higher layer allowing the framework to converge on a joint solution.

Our approach has the most promise for software-defined networks which can easily reconfigure their logical configuration. Our approach is validated with experiments over the fat tree topology, which is commonly used in data center networks. Search terminates in 1-5 minutes in experiments. Thus, our solution can propose new configurations for short term events such as defending against a focused network attack.

Iris Segmentation using CNNs

Sohaib (who’s awesome!) just gave his first presentation on performing iris segmentation using fully convolutional neural nets. The paper was published at AMV 2018 which is a workshop at ACCV.

Abstract: The extraction of consistent and identifiable features from an image of the human iris is known as iris recognition. Identifying which pixels belong to the iris, known as segmentation, is the first stage of iris recognition. Errors in segmentation propagate to later stages. Current segmentation approaches are tuned to specific environments. We propose using a convolutional neural network for iris segmentation. Our algorithm is accurate when trained in a single environment and tested in multiple environments. Our network builds on the Mask R-CNN framework (He et al., ICCV 2017). Our approach segments faster than previous approaches including the Mask R-CNN network. Our network is accurate when trained on a single environment and tested with different sensors (either visible light or near-infrared). Its accuracy degrades when trained with a visible light sensor and tested with a near-infrared sensor (and vice versa). A small amount of retraining of the visible light model (using a few samples from a near-infrared dataset) yields a tuned network accurate in both settings. For training and testing, this work uses the Casia v4 Interval, Notre Dame 0405, Ubiris v2, and IITD datasets.

Same Point Composable and Nonmalleable Obfuscated Point Functions

This is a paper I’m very excited about with Peter Fenteany, a great undergrad at UConn.

Abstract: A point obfuscator is an obfuscated program that indicates if a user enters a previously stored password. A digital locker is stronger: outputting a key if a user enters a previously stored password. The real-or-random transform allows one to build a digital locker from a composable point obfuscator (Canetti and Dakdouk, Eurocrypt 2008). Ideally, both objects would be nonmalleable, detecting adversarial tampering. Appending a non-interactive zero knowledge proof of knowledge adds nonmalleability in the common random string (CRS) model. Komargodski and Yogev (Eurocrypt, 2018) built a nonmalleable point obfuscator without a CRS. We show a lemma in their proof is false, leaving security of their construction unclear. Bartusek, Ma, and Zhandry (Crypto, 2019) used similar techniques and introduced another nonmalleable point function; their obfuscator is not secure if the same point is obfuscated twice. Thus, there was no composable and nonmalleable point function to instantiate the real-or-random construction. Our primary contribution is a nonmalleable point obfuscator that can be composed any polynomial number of times with the same point (which must be known ahead of time). Security relies on the assumption used in Bartusek, Ma, and Zhandry. This construction enables a digital locker that is nonmalleable with respect to the input password. As a secondary contribution, we introduce a key encoding step to detect tampering on the key. This step combines nonmalleable codes and seed-dependent condensers. The seed for the condenser must be public and not tampered, so this can be achieved in the CRS model. The password distribution may depend on the condenser’s seed as long as it is efficiently sampleable. This construction is black box in the underlying point obfuscation. Nonmalleability for the password is ensured for functions that can be represented as low degree polynomials. 
Key nonmalleability is inherited from the class of functions prevented by the nonmalleable code.

FPGA Implementation of a Cryptographically-Secure PUF Based on Learning Parity with Noise

Published in MDPI Cryptography

Joint Work with Chenglu, Charles, Ling, Ha, Srini, and Marten.

Abstract: Herder et al. (IEEE Transactions on Dependable and Secure Computing, 2017) designed a new computational fuzzy extractor and physical unclonable function (PUF) challenge-response protocol based on the Learning Parity with Noise (LPN) problem. The protocol requires no irreversible state updates on the PUFs for security, like burning irreversible fuses, and can correct for significant measurement noise when compared to PUFs using a conventional (information theoretical secure) fuzzy extractor. However, Herder et al. did not implement their protocol. In this paper, we give the first implementation of a challenge response protocol based on computational fuzzy extractors. Our main insight is that “confidence information” does not need to be kept private, if the noise vector is independent of the confidence information, e.g., the bits generated by ring oscillator pairs which are physically placed close to each other. This leads to a construction which is a simplified version of the design of Herder et al. (also building on a ring oscillator PUF). Our simplifications allow for a dramatic reduction in area by making a mild security assumption on ring oscillator physical obfuscated key output bits.
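At a very high level, the "confidence information" in this protocol marks which LPN equations are likely noise-free, and reliable equations let the verifier recover the secret by plain linear algebra. Here is a toy NumPy sketch of that idea; the dimensions, noise rate, and GF(2) solver are illustrative assumptions, not the parameters or circuitry of the actual design (where confidence comes from ring oscillator frequency differences):

```python
import numpy as np

def solve_gf2(M, v):
    """Solve M s = v over GF(2) by Gaussian elimination
    (assumes M has full column rank)."""
    M, v = M.copy() % 2, v.copy() % 2
    row = 0
    for col in range(M.shape[1]):
        piv = row + np.flatnonzero(M[row:, col])[0]  # first row with a 1 here
        M[[row, piv]], v[[row, piv]] = M[[piv, row]], v[[piv, row]]
        for r in range(M.shape[0]):
            if r != row and M[r, col]:
                M[r] ^= M[row]                       # eliminate column col
                v[r] ^= v[row]
        row += 1
    return v[:M.shape[1]]

rng = np.random.default_rng(1)
n, m, tau = 16, 64, 0.125                  # dimension, samples, noise rate

s = rng.integers(0, 2, size=n)             # secret derived from the PUF
A = rng.integers(0, 2, size=(m, n))        # public random matrix
e = (rng.random(m) < tau).astype(int)      # sparse noise vector
b = (A @ s + e) % 2                        # LPN samples: b = As + e

# Confidence information: which equations are likely noise-free.  Here we
# simply reveal e's support to illustrate decoding; in the real design
# reliability is inferred from how far apart paired oscillator counts are.
reliable = np.flatnonzero(e == 0)
s_rec = solve_gf2(A[reliable], b[reliable])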

Reusable Authentication from the Iris

I’m super excited to put out my first paper written solely with UConn students.  James and Sailesh have put a ton of work into this.  We build a full key derivation system from the human iris by integrating image processing and the crypto described in our previous paper.  I’m particularly excited because I started working on this problem in graduate school and it felt like we’d never get to an actual implementation.

Abstract: Mobile platforms use biometrics for authentication. Unfortunately, biometrics exhibit noise between repeated readings. Due to the noise, biometrics are stored in plaintext, so device compromise completely reveals the user’s biometric value.

To limit privacy violations, one can use fuzzy extractors to derive a stable cryptographic key from biometrics (Dodis et al., Eurocrypt 2004). Unfortunately, fuzzy extractors have not seen wide deployment due to insufficient security guarantees. Current fuzzy extractors provide no security for real biometric sources and no security if a user enrolls the same biometric with multiple devices or providers.

Previous work claims key derivation systems from the iris but only under weak adversary models. In particular, no known construction securely handles the case of multiple enrollments. Canetti et al. (Eurocrypt 2016) proposed a new fuzzy extractor called sample-then-lock.

We construct biometric key derivation for the iris starting from sample-then-lock. Achieving satisfactory parameters requires modifying and coupling of the image processing and the cryptography. Our construction is implemented in Python and being open-sourced. Our system has the following novel features:

— 45 bits of security. This bound is pessimistic, assuming the adversary can sample strings distributed according to the iris in constant time. Such an algorithm is not known.

— Secure enrollment with multiple services.

— Natural incorporation of a password, enabling multifactor authentication. The structure of the construction allows the overall security to be the sum of the security of each factor (increasing security to 79 bits).