The Initiatives
The four preliminary initiatives of NINA serve to road-test our practices and the growth of the group, and to demonstrate our ambitions and our capacity to make an impact on contemporary technopolitics.
Really Human Resources
AI has infiltrated the personnel selection process, gamifying it
Personnel selection has been turned into a calculation problem, on the assumption that a life is measurable and comparable. The curriculum vitae is the mould of this idea: it forces workers to compress their history into keywords, roles, results, and dates. Everything that does not fit into a list (contradictions, care contexts, irregular paths) must be removed, and empty sections count as a flaw. So what? Let us think outside the box…
Ghostmaxxing!
Experiments in adversarial disguise to deceive facial recognition
The spread of facial recognition in public and private spaces is today one of the most insidious and pervasive threats to our civil liberties. This technology, imposed from above and deployed without any real democratic consent or transparency, transforms our bodies and our features into extractable commodities, feeding a mass surveillance infrastructure that normalises institutional control throughout Europe. We have tried to resist through institutional channels; having seen their limits, we are now experimenting with self-defence practices…
How Much Does Facebook Owe You?
Experimenting with new forms of negotiation for digital labour
You work for social media: you are their source of revenue. The more people are pigeonholed as users, the more these platforms gain in value, influence, and power. Let us have this value recognised. Let us start by calculating how much money Facebook owes us (and the other great platforms of exploitation). Of course, neither Meta nor any other platform considers itself to owe anything to anyone. But this does not mean that pressure cannot be applied, and for that we need to be many!
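The back-of-the-envelope calculation this initiative proposes can be sketched in a few lines. The figures below are placeholders, not real numbers: actual average-revenue-per-user (ARPU) values, broken down by region, are published in Meta's quarterly earnings reports, and a serious version of this exercise would start from those.

```python
# Hypothetical sketch of the "how much does Facebook owe you?"
# calculation. The ARPU figure and the years of use below are
# placeholders chosen for illustration, not Meta's real numbers.

def platform_debt(quarterly_arpu_eur: float, years_on_platform: float) -> float:
    """Revenue a platform has extracted from one user:
    quarterly ARPU times four quarters times years of use."""
    return quarterly_arpu_eur * 4 * years_on_platform

# Placeholder assumptions: 15 EUR of quarterly ARPU for a
# European user with a ten-year-old account.
owed = platform_debt(15.0, 10)
print(f"Estimated value extracted: {owed:.2f} EUR")  # 600.00 EUR
```

A collective version of the same arithmetic, summed over thousands of users, is what turns an individual grievance into a negotiating position.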
Bonifacio VIII
A disenchantment device against AI safety-washing
Our objective is to unmask the illusion of commercial algorithmic security by releasing Bonifacio VIII: an open-source language model stripped of any cosmetic filter, executable locally and fully inspectable. Conceived as a genuine “negative pedagogical device”, Bonifacio VIII is not designed to be yet another polite and edifying assistant, but rather to expose the grammar of abuse and the capabilities that domesticated interfaces conceal. We want to provide activists, researchers, and civil society with a cognitive and political stress test to demonstrate that generative models contain capabilities that cannot be made safe through simple interface barriers.