The creators of the tool are presenting their work at Usenix Security and have published a paper explaining their methodology, which they call "cloaking."
To my semi-tutored eye, the paper is impressive. The authors validate their tool by cloaking images of one another and testing them against leading facial recognition tools, and find that they can trick these systems 100% of the time.
Importantly, they also devote extensive discussion to countermeasures and explain why they believe facial recognition system developers will struggle to defeat cloaking - or even to detect when cloaking has been used.
But there's at least one red flag here: the authors warn that they are seeking patents on their work and advise that their tool is only for researchers seeking to evaluate it.