Fawkes (software)

From Wikipedia, the free encyclopedia
Fawkes is facial image cloaking software created by the SAND (Security, Algorithms, Networking and Data) Laboratory of the University of Chicago.[1] It is a free tool available as a standalone executable.[2] The software uses artificial intelligence to make small alterations to images so that they cannot be recognized and matched by facial recognition software.[3] The goal of the Fawkes program is to enable individuals to protect their privacy from large-scale data collection. As of May 2022, Fawkes v1.0 had surpassed 840,000 downloads.[4] The SAND Laboratory eventually hopes to deploy the software at a larger scale to counter unauthorized facial recognition.[5]

History

The Fawkes program was named after the fictional protagonist of the graphic novel and film V for Vendetta, who in turn drew inspiration from the historical figure Guy Fawkes.[6] The Fawkes paper was first presented at the USENIX Security Symposium in August 2020, where it was accepted, and the tool was launched shortly afterward. The most recent version available for download, Fawkes v1.0, was released in April 2021 and was still being updated in 2022.[4] The team was led by Emily Wenger and Shawn Shan, PhD students at the University of Chicago, with additional contributions from Jiayun Zhang and Huiying Li and faculty advisors Ben Zhao and Heather Zheng.[7] The team cites nonconsensual data collection, in particular by companies such as Clearview AI, as the primary inspiration behind the creation of Fawkes.[8]

Techniques

The methods that Fawkes uses are closely related to adversarial machine learning. The software perturbs ("cloaks") a user's photos so that a facial recognition model trained on those cloaked images learns a distorted representation of the user's face; the model is then unable to match clean, uncloaked photos to the same person. Fawkes relies on data poisoning attacks, which alter the data set used to train a deep learning model, and specifically on two techniques: clean-label attacks and model corruption attacks. The creators of Fawkes note that using Sybil images, that is, images attributed to a person they do not actually depict, can increase the software's effectiveness against recognition products: the Sybil images further confuse the facial recognition software and lead to misidentification, which also improves the efficacy of image cloaking. Privacy-preserving machine learning uses techniques similar to those of Fawkes but opts for differentially private model training, which helps keep information in the training data set private.[3]
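Fawkes itself computes cloaks against deep face-embedding networks; the following toy sketch (an illustration only, not the Fawkes algorithm) substitutes a fixed random linear map for such a network and uses projected gradient descent to nudge an image's features toward those of a decoy, while an L-infinity clip keeps every pixel change within a small, imperceptibility-style budget. All names here (`features`, `cloak`, `W`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a deep face-embedding network: a fixed linear map
# from a 64-"pixel" image to an 8-dimensional feature vector (assumption).
W = rng.normal(size=(8, 64))

def features(x):
    """Map an image vector to its (toy) feature representation."""
    return W @ x

def cloak(image, decoy, budget=0.05, steps=300, lr=0.002):
    """Perturb `image` so its features move toward `decoy`'s features,
    while clipping the perturbation so no pixel changes by more than
    `budget` (the imperceptibility constraint of cloaking tools)."""
    delta = np.zeros_like(image)
    for _ in range(steps):
        # Gradient of ||W(image + delta) - W(decoy)||^2 w.r.t. delta.
        grad = 2 * W.T @ (features(image + delta) - features(decoy))
        delta -= lr * grad
        # Project back onto the allowed perturbation budget.
        delta = np.clip(delta, -budget, budget)
    return image + delta

original = rng.random(64)
decoy = rng.random(64)
cloaked = cloak(original, decoy)
```

A model trained on `cloaked` would associate the person with features closer to the decoy's, illustrating why it later fails to match clean photos; the real system bounds perturbations perceptually (e.g. with a structural-similarity metric) rather than per-pixel.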

Applications

Fawkes image cloaking can be applied to everyday photos before they are shared online. However, the software's efficacy wanes when a facial recognition system has access to both cloaked and uncloaked images of the same person. The image cloaking software has been tested against high-powered facial recognition systems with varied results.[3] LowKey is a similar facial cloaking tool that also alters images at the visual level, but its alterations are much more noticeable than those made by Fawkes.[2]

References

  1. ^ Vincent, James (4 August 2020). "Cloak your photos with this AI privacy tool to fool facial recognition". The Verge. Retrieved 18 May 2021.
  2. ^ a b Ledford, B. (2021). An Assessment of Image-Cloaking Techniques Against Automated Face Recognition for Biometric Privacy. Master's thesis, Florida Institute of Technology, Melbourne, Florida. Viewed 27 July 2022. https://repository.lib.fit.edu/handle/11141/3478.
  3. ^ a b c Shan, Shawn; Wenger, Emily; Zhang, Jiayun; Li, Huiying; Zheng, Haitao; Zhao, Ben Y. (2020-06-22). "Fawkes: Protecting Privacy against Unauthorized Deep Learning Models". arXiv:2002.08327 [cs.CR].
  4. ^ a b "Fawkes". sandlab.cs.uchicago.edu. Retrieved 2022-07-28.
  5. ^ Hill, Kashmir (2020-08-03). "This Tool Could Protect Your Photos From Facial Recognition". The New York Times. ISSN 0362-4331. Retrieved 2022-07-28.
  6. ^ Grad, Peter. "Image cloaking tool thwarts facial recognition programs". Tech Xplore. Retrieved 2022-07-28.
  7. ^ "UChicago CS Researchers Create New Protection Against Facial Recognition". Department of Computer Science. Retrieved 2022-07-28.
  8. ^ Shan, Shawn; Wenger, Emily; Zhang, Jiayun; Li, Huiying; Zheng, Haitao; Zhao, Ben Y. (2020-06-22). "Fawkes: Protecting Privacy against Unauthorized Deep Learning Models". arXiv:2002.08327 [cs.CR].