DocumentCode :
2378538
Title :
Design of robust classifiers for adversarial environments
Author :
Biggio, Battista ; Fumera, Giorgio ; Roli, Fabio
Author_Institution :
Dept. of Electr. & Electron. Eng., Univ. of Cagliari, Cagliari, Italy
fYear :
2011
fDate :
9-12 Oct. 2011
Firstpage :
977
Lastpage :
982
Abstract :
In adversarial classification tasks like spam filtering, intrusion detection in computer networks, and biometric identity verification, malicious adversaries can craft attacks that exploit vulnerabilities of machine learning algorithms to evade detection, or to force a classification system to generate so many false alarms that it becomes useless. Several works have addressed the problem of designing robust classifiers against these threats, although mainly focusing on specific applications and kinds of attacks. In this work, we propose a model of data distribution for adversarial classification tasks, and exploit it to devise a general method for designing robust classifiers, focusing on generative classifiers. Our method is then evaluated on two case studies concerning biometric identity verification and spam filtering.
Keywords :
biometrics (access control); learning (artificial intelligence); pattern classification; security of data; unsolicited e-mail; adversarial classification tasks; adversarial environments; biometric identity verification; computer networks; data distribution model; generative classifiers; intrusion detection; machine learning algorithms; malicious adversaries; robust classifier design; spam filtering; Biological system modeling; Data models; Electronic mail; Robustness; Testing; Training; Training data; Pattern classification; adversarial classification; robust classifiers
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2011 IEEE International Conference on Systems, Man, and Cybernetics (SMC)
Conference_Location :
Anchorage, AK
ISSN :
1062-922X
Print_ISBN :
978-1-4577-0652-3
Type :
conf
DOI :
10.1109/ICSMC.2011.6083796
Filename :
6083796