Ghostmaxxing!

Experiments in adversarial disguise to deceive facial recognition

The deployment of facial recognition in public and private spaces is today one of the most insidious and pervasive threats to our civil liberties. Imposed from above, without any real democratic consent or transparency, this technology transforms our bodies and our features into extractable commodities, feeding a mass-surveillance infrastructure that normalises institutional control throughout Europe. We have tried to resist through institutional channels — and, having seen their limits, we are beginning to experiment with practices of self-defence…

Activist campaigns such as Reclaim Your Face [1] have been denouncing this techno-authoritarian drift for years, reaffirming the urgency of organising a radical, grassroots response to mass biometric surveillance [2].

The approach: resistance through make-up and adversarial methods

Faced with the arrogance of algorithmic surveillance, digital self-defence moves from the streets to our very faces through adversarial practices: genuine acts of physical and visual hacking designed to deceive and sabotage deep learning models. Using targeted techniques such as adversarial make-up, geometric patterns (adversarial patches), or anti-recognition fashion-tech fabrics, we can strategically alter the landmark points of the face, short-circuiting computer vision systems. These perturbations exploit the intrinsic vulnerabilities and mathematical shortcuts of neural networks: by applying specific eyeshadow to the orbital regions or blocks of visual “noise”, we render our appearance illegible to the machine, restoring the opacity needed to escape the predatory capture of detectors [3][4].
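The core trick can be illustrated on a toy model. The sketch below is an assumption made purely for illustration — it is not code from the Ghostmaxxing app or from the cited papers — and applies a fast-gradient-sign-style perturbation to a linear “face scorer”: because the gradient of a linear score is just the weight vector, nudging every pixel a small, bounded amount against it flips the model's verdict. Adversarial make-up exploits the same principle against far deeper networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "face scorer": score = w . x, "face" when score > 0.
# Real detectors are deep networks, but the attack principle is the same.
w = rng.normal(size=64)       # model weights (stand-in for a trained net)
x = w / np.linalg.norm(w)     # an input the model scores confidently as "face"

def score(img: np.ndarray) -> float:
    return float(w @ img)

# FGSM-style evasion: for a linear model, the gradient of the score with
# respect to the input is simply w, so we step each "pixel" against its sign.
eps = 0.35                    # per-pixel perturbation budget
x_adv = x - eps * np.sign(w)

assert score(x) > 0 and score(x_adv) < 0   # small change, flipped verdict
```

Note the budget: no single pixel moves by more than `eps`, which is why such perturbations can stay visually subtle while still crossing the model's decision boundary.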

The tools released: taking back control of technology

To transform theory into a tool for struggle, we have developed and gathered open-source resources designed to test defences directly on our own devices. Try the web app Ghostmaxxing!, available at sindacato.nina.watch/ghostati. It is a testing tool that uses local recognition models to let you experience in real time the effectiveness of adversarial make-up via your webcam. Use it, study it, and fork it from our GitHub to deconstruct its mechanisms, improve the code, and create new interfaces of technological resistance [5].

The practice: bodies, experimentation, and the call to the NINA Festival

Algorithms are not only fought on servers — they are fought on bodies. The effectiveness of these tools must be validated collectively: adversarial practice requires continuous experimentation, testing on different systems, documentation of failures, and gathering of feedback to refine techniques. To move to concrete action, we invite you to the NINA Festival in Milan, Saturday 9 May at Rob de Matt (Via Enrico Annibale Butti, 18). From 4:00 pm onwards, during the talk “Ghosted. Fashion-tech and biometric data protection”, we will lead a public workshop with Michelle Tylicki and other expert make-up artists. We will live-test make-up prototypes, record before/after results, and film the interactions for the next phase of our campaign. Join us: come to be made up, to fool the machines, and to take back your face.



  1. European Citizens’ Initiative Reclaim Your Face (2021), for the ban on mass biometric surveillance practices. ↩︎

  2. Privacy Network, SARI Enterprise report. ↩︎

  3. Yinpeng Dong et al. (2021), “Adv-Makeup: A New Imperceptible and Transferable Attack on Face Recognition”. ↩︎

  4. IBM, Adversarial Robustness Toolbox (ART), official documentation on spatial evasion techniques and DPatch. ↩︎

  5. Ghostmaxxing on GitHub, vecna/ghostati. ↩︎