Secure ML Demo - Deep Learning Security
Secure ML Research Tutorial: Wild Patterns Secure ML Library Web Demo

Secure ML Demo has been partially developed with the support of the European Union's ALOHA project,
funded under the Horizon 2020 Research and Innovation programme, grant agreement No. 780788.

This web demo allows the user to evaluate the security of a neural network against worst-case input perturbations [1]. Attackers craft such carefully designed perturbations to create adversarial examples and perform evasion attacks: feeding these examples to the network causes it to misclassify them [2]. To defend a system, we first need to evaluate how effective such attacks are against it. During the security evaluation process, the network is tested against increasing levels of perturbation, and its accuracy is tracked to produce a security evaluation curve. This curve, which shows how accuracy drops as the maximum allowed input perturbation grows, can be used directly by the model designer to compare different networks and countermeasures. The user can also generate and inspect adversarial examples and see how the perturbation affects the outputs of the network.
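To make the evaluation loop concrete, below is a minimal, self-contained sketch of how such a security evaluation curve could be computed. It is not the code behind this demo: it uses a simple one-step FGSM perturbation rather than the stronger iterative attacks typically used for worst-case evaluation, and the names `model` and `test_loader` are placeholders for any trained PyTorch classifier and its test set. The overall structure (accuracy measured at increasing perturbation budgets) is the same idea plotted by the demo's curve.

```python
# Sketch of a security evaluation curve: accuracy vs. perturbation budget.
# Assumptions: `model` is a trained PyTorch classifier, `test_loader` a DataLoader
# yielding (inputs, labels); both are placeholders, not part of the demo itself.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps):
    """Craft one-step FGSM adversarial examples with L-infinity budget eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction of the loss-gradient sign, then clip to the valid input range.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def security_evaluation(model, test_loader, eps_values):
    """Return the model's accuracy under each perturbation budget in eps_values."""
    model.eval()
    curve = []
    for eps in eps_values:
        correct, total = 0, 0
        for x, y in test_loader:
            x_adv = x if eps == 0 else fgsm_perturb(model, x, y, eps)
            with torch.no_grad():
                preds = model(x_adv).argmax(dim=1)
            correct += (preds == y).sum().item()
            total += y.numel()
        curve.append(correct / total)
    return curve

# Example usage: eps = 0 gives the clean accuracy; the remaining points trace the curve.
eps_values = [0.0, 0.05, 0.1, 0.2, 0.3]
# accuracies = security_evaluation(model, test_loader, eps_values)
```

Plotting `accuracies` against `eps_values` yields a security evaluation curve of the kind described above; a flatter curve indicates a model that is more robust to the chosen attack.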
For further details on different attack algorithms and defense methods, we refer the reader to Biggio and Roli [3].
[1] Szegedy et al., Intriguing Properties of Neural Networks, ICLR 2014.
[2] Biggio et al., Evasion Attacks against Machine Learning at Test Time, ECML PKDD 2013.
[3] Biggio and Roli, Wild Patterns: Ten Years after the Rise of Adversarial Machine Learning, Pattern Recognition, 2018.