
Fawkes is a new anti-facial-recognition tool from the University of Chicago's SAND Lab. It subtly alters faces in images so that they cannot be correctly classified by common machine learning classifiers, while leaving them legible to human viewers.

sandlab.cs.uchicago.edu/fawkes

These small changes - adversarial perturbations - merge features from other people's pictures with your own to cause classifiers to misfire, mistaking pictures of you for pictures of your "masking" target.
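
To make the idea concrete, here's a minimal sketch of feature-space cloaking - my own gloss, not the authors' patent-pending implementation. It assumes a pretrained face-embedding network phi and PyTorch tensors x (your photo) and x_target (the masking target), both shaped (1, 3, H, W) with values in [0, 1]. The paper bounds the visible change with a DSSIM perceptual metric; I've simplified that to an L2 penalty plus a per-pixel cap here:

    # Sketch of feature-space "cloaking" in spirit only - not the authors' code.
    # Assumes: phi = pretrained face-embedding network; x, x_target = image
    # tensors shaped (1, 3, H, W) in [0, 1].
    import torch

    def cloak(phi, x, x_target, budget=0.03, steps=200, lr=0.01, lam=10.0):
        phi.eval()
        with torch.no_grad():
            target_feat = phi(x_target)          # features to imitate
        delta = torch.zeros_like(x, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            cloaked = (x + delta).clamp(0.0, 1.0)
            # Pull the cloaked image's features toward the target's, while an
            # L2 penalty keeps the pixel change small (the paper uses a DSSIM
            # perceptual bound instead; L2 is a simplification).
            loss = torch.dist(phi(cloaked), target_feat) + lam * delta.norm()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-budget, budget)    # hard cap on per-pixel change
        return (x + delta).clamp(0.0, 1.0).detach()

The cloaked output still looks like you, but its embedding sits near the masking target's, so a model trained on cloaked photos learns the wrong region of feature space.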

1/

The creators of the tool are presenting their work at USENIX Security and have published a paper explaining their methodology, which they call "cloaking."

people.cs.uchicago.edu/%7Erave

To my semi-tutored eye, the paper is impressive. The authors validate their tool by cloaking images of one another and testing them against leading facial recognition services - Microsoft Azure Face API, Amazon Rekognition, and Face++ - and find that they can trick these systems 100% of the time.
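
As a rough illustration of that evaluation protocol (my assumption of the setup, not their actual harness): a mock "tracker" trains a nearest-centroid classifier on embeddings of the cloaked gallery photos, and protection is scored by how often clean photos of the user then get mislabeled. Here embed, cloaked_gallery_by_id, clean_probes, and user_id are all hypothetical names:

    # Sketch of a cloaking protection metric - hypothetical, not the paper's code.
    # embed(images) is an assumed function returning an (N, D) feature array.
    import numpy as np

    def protection_rate(embed, cloaked_gallery_by_id, clean_probes, user_id):
        ids = sorted(cloaked_gallery_by_id)
        # The "tracker" learns one centroid per identity from cloaked photos.
        centroids = np.stack(
            [embed(cloaked_gallery_by_id[i]).mean(axis=0) for i in ids])
        feats = embed(clean_probes)              # clean photos of the user
        dists = np.linalg.norm(
            feats[:, None, :] - centroids[None, :, :], axis=2)
        preds = np.asarray(ids)[dists.argmin(axis=1)]
        # Cloaking "succeeds" on a probe when the tracker mislabels the user.
        return float((preds != user_id).mean())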

2/


Importantly, they also devote extensive discussion to countermeasures, setting out reasons they believe facial recognition system developers will struggle to defeat cloaking - or even to detect when cloaking has been used.

But there's at least one red flag here: the authors warn that they are seeking patents on their work and advise that their tool is only for researchers seeking to evaluate it.

eof/
