July 8, 2019: BBC News, AI pilot 'sees' runway and lands automatically
April 24, 2019: New Scientist, Machine mind hack: The new threat that could scupper the AI revolution
March 3, 2019: El País, La guerra de los robots: la salvación antes de la era ‘fake’ (The robot war: salvation before the ‘fake’ era)
February 2, 2019: The Register, Fool ML once, shame on you. Fool ML twice, shame on... the AI dev?
January 3, 2019: Bloomberg News, Artificial Intelligence Vs. the Hackers
- show that machine-learning algorithms are vulnerable to gradient-based adversarial manipulations of the input data, both at test time (evasion attacks) and at training time (poisoning attacks);
- derive a systematic framework for the security evaluation of learning algorithms (a minimal sketch of such an evaluation is given after this list); and
- develop suitable countermeasures for improving their security.
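The sketch below illustrates the security-evaluation idea on a toy setup; it is not the framework from our papers, but a minimal example assuming scikit-learn, synthetic data, and a linear SVM, for which the worst-case L2 evasion direction is simply the (signed) weight vector. Accuracy is reported as the attacker's perturbation budget grows.

```python
# A minimal security-evaluation sketch (illustrative assumptions only):
# measure how a linear SVM's accuracy degrades as the attacker's L2
# perturbation budget eps increases.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

clf = LinearSVC(C=1.0, max_iter=5000).fit(Xtr, ytr)
w = clf.coef_.ravel()
w_unit = w / np.linalg.norm(w)

# Security evaluation curve: accuracy vs. attack strength eps.
for eps in [0.0, 0.5, 1.0, 2.0, 4.0]:
    # For a linear classifier, the worst-case L2 move is along +/- w:
    # push each test point toward the opposite class.
    signs = np.where(yte == 1, -1.0, 1.0)
    X_adv = Xte + eps * signs[:, None] * w_unit
    print(f"eps = {eps:4.1f}  ->  accuracy = {clf.score(X_adv, yte):.2f}")
```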
Members of our research group were among the first to demonstrate these attacks against well-known machine-learning algorithms, including support vector machines and neural networks (Biggio et al., 2013). Evasion attacks were later independently rediscovered in the area of deep learning and computer vision (Szegedy et al., 2014) under the name of adversarial examples, i.e., images that are misclassified by deep-learning algorithms even though they are only imperceptibly perturbed.
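As a toy illustration of such a gradient-based evasion attack, the following sketch crafts an FGSM-style adversarial example against a small neural network. The model, the 8x8 digits dataset, and the perturbation budget are illustrative assumptions (PyTorch and scikit-learn), not the setups used in the cited papers.

```python
# A minimal FGSM-style evasion sketch: perturb an input along the sign of
# the loss gradient so that a well-classified digit is (typically) flipped,
# while each pixel changes by at most eps.
import torch
import torch.nn as nn
from sklearn.datasets import load_digits

X, y = load_digits(return_X_y=True)
X = torch.tensor(X / 16.0, dtype=torch.float32)   # scale 8x8 digits to [0, 1]
y = torch.tensor(y)

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):                               # quick full-batch training
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

# Craft an adversarial example for one training digit.
x = X[0:1].clone().requires_grad_(True)
loss_fn(model(x), y[0:1]).backward()
eps = 0.2                                          # max per-pixel perturbation
x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()

print("original prediction   :", model(X[0:1]).argmax(1).item(), "(true:", y[0].item(), ")")
print("adversarial prediction:", model(x_adv).argmax(1).item())
```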
We first demonstrated poisoning attacks against support vector machines (Biggio et al., 2012), then against LASSO, ridge, and elastic-net regression (Xiao et al., 2015; Jagielski et al., 2018), and more recently against neural networks and deep-learning algorithms (Muñoz-González et al., 2017).
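The sketch below conveys the flavour of these gradient-based poisoning attacks on a much simpler target, ridge regression with a closed-form solution. It is only an illustration: the gradient of the validation loss with respect to the poisoning point is approximated by finite differences rather than the bilevel derivations of the cited papers, and the data, the fixed poison label, and all parameters are assumptions made for the example.

```python
# A minimal poisoning sketch against ridge regression: one attacker-controlled
# training point is moved by (approximate) gradient ascent on the validation
# loss, degrading the model learned from the poisoned training set.
import numpy as np

rng = np.random.default_rng(0)

def ridge_fit(X, y, lam=0.1):
    """Closed-form ridge regression weights."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def val_loss(Xtr, ytr, Xval, yval, xc, yc):
    """Validation MSE after training on the clean data plus one poison point."""
    Xp, yp = np.vstack([Xtr, xc]), np.append(ytr, yc)
    w = ridge_fit(Xp, yp)
    return np.mean((Xval @ w - yval) ** 2)

# Toy linear data.
d = 5
w_true = rng.normal(size=d)
Xtr = rng.normal(size=(40, d)); ytr = Xtr @ w_true + 0.1 * rng.normal(size=40)
Xval = rng.normal(size=(40, d)); yval = Xval @ w_true + 0.1 * rng.normal(size=40)

# Poison point: random features with an adversarial target response.
# (Real attacks also constrain the point to a feasible input domain.)
xc, yc = rng.normal(size=d), 10.0
eps, step = 1e-4, 0.5
for _ in range(50):
    grad = np.zeros(d)
    for j in range(d):                      # finite-difference gradient
        e = np.zeros(d); e[j] = eps
        grad[j] = (val_loss(Xtr, ytr, Xval, yval, xc + e, yc)
                   - val_loss(Xtr, ytr, Xval, yval, xc - e, yc)) / (2 * eps)
    xc += step * grad / (np.linalg.norm(grad) + 1e-12)

print("clean val MSE   :", np.mean((Xval @ ridge_fit(Xtr, ytr) - yval) ** 2))
print("poisoned val MSE:", val_loss(Xtr, ytr, Xval, yval, xc, yc))
```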
- Poisoning Attacks against Support Vector Machines, B. Biggio, B. Nelson, P. Laskov. In ICML 2012
- Evasion Attacks against Machine Learning at Test Time, B. Biggio, I. Corona, D. Maiorca, B. Nelson, N. Šrndić, P. Laskov, G. Giacinto, F. Roli. In ECML-PKDD 2013
- Security Evaluation of Pattern Classifiers under Attack, B. Biggio, G. Fumera, F. Roli. In IEEE TKDE 2014
- Is Feature Selection Secure against Training Data Poisoning?, H. Xiao, B. Biggio, G. Brown, G. Fumera, C. Eckert, F. Roli. In ICML 2015
- Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid, M. Melis, A. Demontis, B. Biggio, G. Brown, G. Fumera, F. Roli. In ICCV 2017 Workshop ViPAR
- Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization, L. Muñoz-González, B. Biggio, A. Demontis, A. Paudice, V. Wongrassamee, E. C. Lupu, F. Roli. In AISec 2017
- Yes, Machine Learning Can Be More Secure! A Case Study on Android Malware Detection, A. Demontis, M. Melis, B. Biggio, D. Maiorca, D. Arp, K. Rieck, I. Corona, G. Giacinto, F. Roli. In IEEE TDSC 2017
- Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning, B. Biggio, F. Roli. In Pattern Recognition 2018
"The sad thing about artificial intelligence is that it lacks artifice and therefore intelligence."
Jean Baudrillard, Sociologist