Fawkes is a new anti-facial-recognition tool from the University of Chicago's SAND Lab. It subtly alters faces in images so that they cannot be correctly classified by common machine-learning classifiers, while leaving them perfectly legible to human viewers.
These small changes, called adversarial perturbations, merge features from other people's pictures with your own, causing classifiers to misfire and mistake pictures of you for pictures of your "masking" target.
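To get a feel for how this kind of cloaking works, here is a toy sketch (not the actual Fawkes code, whose feature extractor is a deep network): it nudges an image so its feature embedding drifts toward a masking target's embedding, while clipping the pixel change to a small budget so it stays imperceptible. The linear "feature extractor" and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(image, W):
    # Stand-in for a deep feature extractor: a fixed linear map.
    return W @ image.ravel()

def cloak(image, target_features, W, step=0.01, budget=0.05, iters=50):
    """Gradient steps that pull the image's features toward the target's,
    with the total perturbation clipped to a small L-infinity budget."""
    perturbed = image.copy()
    for _ in range(iters):
        diff = extract_features(perturbed, W) - target_features
        # Gradient of ||W x - t||^2 with respect to x is 2 W^T (W x - t).
        grad = 2 * (W.T @ diff).reshape(image.shape)
        perturbed -= step * grad
        # Keep the change within the imperceptibility budget.
        perturbed = np.clip(perturbed, image - budget, image + budget)
    return perturbed

image = rng.random((8, 8))             # your photo (toy 8x8 "image")
target = rng.random((8, 8))            # the "masking" target's photo
W = rng.standard_normal((16, 64)) / 8  # toy feature extractor
t_feat = extract_features(target, W)

cloaked = cloak(image, t_feat, W)
# The cloaked image barely differs in pixel space...
print(np.abs(cloaked - image).max())
# ...but its features have moved measurably closer to the target's,
# which is what pushes a classifier toward the wrong identity.
before = np.linalg.norm(extract_features(image, W) - t_feat)
after = np.linalg.norm(extract_features(cloaked, W) - t_feat)
print(before, after)
```

The key tension the sketch illustrates is the one Fawkes navigates: the perturbation must be large enough in feature space to fool the model, yet small enough in pixel space that a human sees nothing amiss.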
Importantly, the authors also devote extensive discussion to countermeasures, setting out reasons to believe that facial-recognition system developers will struggle to defeat cloaking, or even to detect when it has been used.
But there's at least one red flag here: the authors warn that they are seeking patents on their work and advise that their tool is intended only for researchers evaluating it.