Dodging Attack Using Carefully Crafted Natural Makeup

In the end, it's just a matter of model training. For now, it works …

Deep learning face recognition models are used by state-of-the-art surveillance systems to identify individuals passing through public areas (e.g., airports). Previous studies have demonstrated the use of adversarial machine learning (AML) attacks to successfully evade identification by such systems, both in the digital and physical domains. Attacks in the physical domain, however, require significant manipulation of the human participant's face, which can raise suspicion among human observers (e.g., airport security officers). In this study, we present a novel black-box AML attack that carefully crafts natural makeup which, when applied to a human participant, prevents the participant from being identified by facial recognition models. We evaluated our proposed attack against the ArcFace face recognition model, with 20 participants in a real-world setup that includes two cameras, different shooting angles, and different lighting conditions.
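To make the dodging criterion concrete: recognizers like ArcFace embed each face as a vector and declare a match when the cosine similarity between a probe image and an enrolled reference exceeds a decision threshold, so the attack succeeds when the makeup pushes the probe's embedding below that threshold. The sketch below is not the paper's code; the 512-dimensional embeddings and the 0.36 threshold are illustrative assumptions, and random vectors stand in for real ArcFace outputs.

```python
# Minimal sketch of how a dodging attack is scored against an
# embedding-based recognizer (ArcFace-style). All numbers here are
# illustrative assumptions, not values from the paper.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def is_identified(reference: np.ndarray, probe: np.ndarray,
                  threshold: float = 0.36) -> bool:
    """True if the recognizer would match the probe to the enrolled identity.

    The threshold is a placeholder; real deployments calibrate it on a
    validation set for a target false-accept rate.
    """
    return cosine_similarity(reference, probe) >= threshold


# Toy example with random 512-d vectors standing in for face embeddings.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=512)                          # enrolled face
before_makeup = enrolled + 0.1 * rng.normal(size=512)    # near-duplicate probe
after_makeup = rng.normal(size=512)                      # embedding pushed away

print("identified before makeup:", is_identified(enrolled, before_makeup))
print("identified after makeup: ", is_identified(enrolled, after_makeup))
```

A dodging attack of the kind the paper describes would search (in a black-box manner, via repeated queries) for a natural-looking makeup pattern that drives the "after" similarity below the threshold while keeping the face inconspicuous to human observers.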

via Chrome: read the source article
