A Parametric Approach to Adversarial Augmentation for Cross-Domain Iris Presentation Attack Detection
Debasmita Pal, Redwan Sony, and Arun Ross
In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2025
Iris-based biometric systems are vulnerable to presentation attacks (PAs), where adversaries present physical artifacts (e.g., printed iris images, textured contact lenses) to defeat the system. This has led to the development of various presentation attack detection (PAD) algorithms, which have demonstrated good performance in intra-domain settings. However, they often struggle to generalize effectively in cross-domain scenarios, where training and testing employ different sensors, PA instruments, and datasets. In this work, we use adversarial training samples of both bonafide irides and PAs to improve the cross-domain performance of a PAD classifier. The novelty lies in the method used to generate these samples. The proposed method leverages the transformation parameters used by classical data augmentation schemes (e.g., translation, rotation, shear). This is accomplished via a convolutional autoencoder, ADV-GEN, that utilizes original training samples along with a set of geometric and photometric transformations to produce adversarial samples. The transformation parameters act as regularization variables to guide ADV-GEN toward generating adversarial samples, which are then used in the training of a PAD classifier. Experiments conducted on the LivDet-Iris 2017 database, comprising four datasets, demonstrate the efficacy of this method in cross-domain scenarios.
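The core idea of conditioning an autoencoder on classical augmentation parameters can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the network sizes, the six-parameter vector, and the broadcast-add fusion of parameters into the latent map are all assumptions made for clarity.

```python
import torch
import torch.nn as nn

class AdvGen(nn.Module):
    """Hypothetical sketch of an ADV-GEN-style convolutional autoencoder.

    The transformation parameters of a classical augmentation (e.g., rotation
    angle, translation, shear) are projected and fused into the latent
    representation, steering the decoder toward adversarial reconstructions.
    Layer widths and the fusion scheme here are illustrative assumptions.
    """

    def __init__(self, n_params: int = 6):
        super().__init__()
        # Encoder: two strided convolutions downsample the input by 4x.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Project the transformation-parameter vector to the latent channel dim.
        self.param_proj = nn.Linear(n_params, 32)
        # Decoder: transposed convolutions restore the original resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor, t_params: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)                # (B, 32, H/4, W/4)
        p = self.param_proj(t_params)      # (B, 32)
        z = z + p[:, :, None, None]        # broadcast-add parameter conditioning
        return self.decoder(z)

model = AdvGen(n_params=6)
x = torch.rand(2, 1, 64, 64)   # toy grayscale iris crops
t = torch.rand(2, 6)           # hypothetical [angle, tx, ty, shear, ...] vector
adv = model(x, t)
print(adv.shape)               # same spatial size as the input
```

The generated samples would then be labeled with their source class (bonafide or PA) and mixed into the PAD classifier's training set; the parameter vector acts as the regularization signal described in the abstract.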