Ubiquitous facial recognition is a serious threat to privacy. The idea that companies can collect the photos we share, use them to train facial recognition algorithms, and then sell those tools commercially is worrying: anyone could snap a picture of a stranger, run it through such a service, and find out who they are in seconds. But researchers have come up with a clever way to fight back. The tool, called Fawkes, was created by scientists at the University of Chicago's SAND Lab and is named after the Guy Fawkes masks donned by revolutionaries in the V for Vendetta film and graphic novel. Fawkes uses artificial intelligence to subtly, almost imperceptibly, alter your photos in order to trick facial recognition systems.
The way the software works is a little complex. Running your photos through Fawkes doesn't make you invisible to facial recognition, exactly. Instead, it makes subtle changes to your photos so that any algorithm scanning those images in the future sees you as a different person altogether. In essence, running Fawkes on your photos is like adding an invisible mask to your selfies. The researchers call this process "cloaking," and it's intended to corrupt the resource that facial recognition systems depend on: databases of faces scraped from social media. Clearview AI, for example, a facial recognition firm, claims to have collected some three billion images of faces from sites like Facebook, Venmo, and YouTube, which it uses to identify strangers. But if the photos you share online have been run through Fawkes, say the researchers, then the algorithms will not recognize the face in those photos as your own.
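The core idea can be illustrated with a toy sketch. This is not the actual Fawkes algorithm, which perturbs images against deep neural feature extractors; here a fixed random linear map stands in for the recognizer's feature extractor, and the "photo" is just a vector. The sketch nudges the pixels within a small, imperceptible budget so that the extracted features drift as far as possible from the original, which is the basic mechanics of cloaking:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a facial-recognition feature extractor: a fixed
# linear map from "pixel" space to an embedding space. (Fawkes itself
# attacks deep feature extractors; this is only an illustration.)
W = rng.normal(size=(16, 64))

def embed(x):
    return W @ x

def cloak(x, budget=0.05, steps=100, lr=0.01):
    """Perturb x within an L-infinity budget so that its embedding
    moves as far as possible from the original embedding."""
    original = embed(x)
    delta = rng.normal(scale=1e-3, size=x.shape)  # tiny random start
    for _ in range(steps):
        # Gradient of ||embed(x + delta) - original||^2 w.r.t. delta
        grad = 2 * W.T @ (embed(x + delta) - original)
        # Ascend the gradient, then clip to keep the change imperceptible
        delta = np.clip(delta + lr * grad, -budget, budget)
    return x + delta

photo = rng.normal(size=64)   # a stand-in "photo"
cloaked = cloak(photo)

pixel_change = np.max(np.abs(cloaked - photo))   # never exceeds the budget
feature_shift = np.linalg.norm(embed(cloaked) - embed(photo))
print(f"max pixel change: {pixel_change:.3f}, feature shift: {feature_shift:.2f}")
```

The pixel-level change stays within the tiny budget, yet the embedding the "recognizer" sees moves substantially, which is why a model trained on cloaked photos learns the wrong representation of a face.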
The researchers report that Fawkes was 100 percent successful in tests against state-of-the-art facial recognition services: Microsoft's Azure Face, Amazon's Rekognition, and Face++, developed by Chinese tech giant Megvii. Ben Zhao, a professor of computer science at the University of Chicago, told The Verge that the cloaked photo is used, in essence, like a Trojan horse: it corrupts unauthorized models, leading the algorithm to learn the wrong features of what makes you look like you and not someone else. Once that corruption happens, Zhao added, you are continuously protected no matter where you go or are seen.
The team behind this work (Ben Y. Zhao, Haitao Zheng, Huiying Li, Jiayun Zhang, Emily Wenger, and Shawn Shan) published a paper on the algorithm earlier this year. Late last month, they also released Fawkes as free software for Windows and Mac that anyone can download and use, and they say it has now been downloaded more than 100,000 times. In our tests, we found Fawkes sparse in design but easy enough to apply: it takes a couple of minutes to process each image, and the changes it makes are mostly imperceptible. When The New York Times published a story on Fawkes earlier this week, the cloaking effect was quite evident, often making gendered changes to images such as giving women mustaches. But the Fawkes team says the updated algorithm is much more subtle.