Title:
Design of robust classifiers for adversarial environments
Author:
Biggio, Battista ; Fumera, Giorgio ; Roli, Fabio
Author_Institution:
Dept. of Electrical and Electronic Engineering, University of Cagliari, Cagliari, Italy
Abstract:
In adversarial classification tasks such as spam filtering, intrusion detection in computer networks, and biometric identity verification, malicious adversaries can craft attacks that exploit vulnerabilities of machine learning algorithms, either to evade detection or to force a classification system to generate so many false alarms that it becomes useless. Several works have addressed the problem of designing classifiers that are robust against these threats, although mainly focusing on specific applications and kinds of attacks. In this work, we propose a model of the data distribution for adversarial classification tasks and exploit it to devise a general method for designing robust classifiers, focusing on generative classifiers. Our method is then evaluated on two case studies concerning biometric identity verification and spam filtering.
Keywords:
biometrics (access control); learning (artificial intelligence); pattern classification; security of data; unsolicited e-mail; adversarial classification tasks; adversarial environments; biometric identity verification; computer networks; data distribution model; generative classifiers; intrusion detection; machine learning algorithms; malicious adversaries; robust classifier design; spam filtering; biological system modeling; data models; electronic mail; robustness; testing; training; training data; pattern classification; adversarial classification; robust classifiers
Conference_Title:
2011 IEEE International Conference on Systems, Man, and Cybernetics (SMC)
Conference_Location:
Anchorage, AK, USA
Print_ISBN:
978-1-4577-0652-3
DOI:
10.1109/ICSMC.2011.6083796