Anonymize photos to protect BLM protesters


You can find more examples from @BLMPrivacyBot. This tool is free and open source.


Arrests from public protest images discourage protests.
Over the past weeks, we have seen an increasing number of arrests at BLM protests. Images circulating around the web enable automatic identification of protesters, which leads to arrests that hamper protest activity. This primarily concerns protest images shared on social media.

Numerous applications that aim to anonymize protest images and enable people to continue protesting in safety have emerged in response to this threat. Of course, this requires the public to recognize the issue and an easy, effective anonymization method to surface. In an ideal world, platforms like Twitter would offer an on-platform solution. Unfortunately, to beat facial recognition, blurring faces is not enough.
So what's your goal? AI to help alleviate some of the worst parts of AI...
The goal of this work is to leverage our group's knowledge of facial recognition AI to offer the most effective anonymization tool we can: one that evades state-of-the-art facial recognition technology.

Some technical considerations
AI facial recognition models can recognize blurred faces. This work discourages attempts to recognize or reconstruct pixelated faces by covering people's faces with a fully opaque mask instead. We use the BLM fist emoji as that mask, in solidarity. While posting anonymized images does not delete the originals, we are starting with awareness and hope that Twitter and other platforms will offer an on-platform solution (it might be a tall order, but one can hope).
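To illustrate the difference, here is a minimal sketch (not the production code) of pasting a fully opaque mask over a detected face region with Pillow; the `fist.png` asset and the face box coordinates are hypothetical.

```python
from PIL import Image, ImageDraw

def mask_face(photo_path, face_box, emoji_path="fist.png", out_path="anonymized.jpg"):
    """Cover a face with an opaque mask instead of blurring it.

    face_box is (left, top, right, bottom) in pixels, e.g. from any
    face/head detector. Blurred pixels still carry identity signal;
    fully replacing them does not.
    """
    photo = Image.open(photo_path).convert("RGB")
    left, top, right, bottom = face_box

    # 1) Wipe the region entirely so no facial pixels survive.
    ImageDraw.Draw(photo).rectangle(face_box, fill=(0, 0, 0))

    # 2) Paste the fist emoji on top (alpha-composited) for the visual.
    emoji = Image.open(emoji_path).convert("RGBA")
    emoji = emoji.resize((right - left, bottom - top))
    photo.paste(emoji, (left, top), emoji)

    photo.save(out_path, format="JPEG")

# Hypothetical usage with one detected face box:
# mask_face("protest.jpg", (120, 80, 220, 200))
```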

Importantly, this application does not save images. We hope the transparency of this open source repository will allow for community input. The Twitter bot posts anonymized images based on the Fair Use policy; however, if your image is used and you'd like it to be taken down, we will do our best to do so immediately.
Limits of facial recognition
Past research has demonstrated that facial recognition technologies are biased. When three commercial facial recognition technologies were examined for bias, researchers found that darker-skinned females were the most misclassified group (with error rates of up to 34.7%), while the maximum error rate for lighter-skinned males was 0.8%. The ACLU also tested Amazon's facial recognition software and found that it incorrectly matched 28 members of Congress with people who had been arrested. More recently, Homeland Security appears worried that facial recognition algorithms will have high error rates on people wearing masks.

So, sometimes facial recognition is bad at identifying people...isn't that a good thing? Unfortunately, the use of facial recognition by police and the government is largely unregulated in America, and people have been wrongfully arrested for crimes they didn't commit because of incorrect facial recognition matches.

Blocking out the face offers a strong form of anonymization; nevertheless, it should not be mistaken for complete, foolproof anonymity. For example, someone might be wearing a t-shirt with their SSN, or might appear unmasked in another image where their identity can be triangulated through similar clothing and surroundings. It may be self-evident, but this tool is not perfect, so please lend a careful and caring eye to ensuring faces are masked before sharing photos.
Some social considerations
As is the case with any technological solution, we recognize that our approach cannot fully solve this very complex social issue. We sincerely hope that our tool is a step in the right direction and is not used for something malicious. However, while we gave this careful thought, we recognize that we may not have considered every possible scenario and are open to addressing any concerns you may have with the technology.
FAQ
How can AI models still recognize blurred faces, even if they cannot reconstruct them perfectly? Recognition is different from reconstruction. Facial recognition technology can still identify many blurred faces, and it is better than humans at doing so. Reconstruction is a much more arduous task (see the difference between discriminative and generative models, if you're curious). Reconstruction has also recently been shown to be very biased (see lessons from PULSE).

Blurring faces carries the added threat of encouraging certain people or groups to de-anonymize images, either by reconstructing the blurred faces or by directly identifying individuals through recognition.

Do you save any images? No. The goal of this tool is to protect your privacy, and saving the images would be antithetical to that. We don't save any images you give us or any of the anonymized images created by the AI model (sometimes they're not perfect, so saving them would still not be great!). If you like technical details: the image is passed to the AI model in the cloud, and the output is passed back and displayed directly on your screen as a base64-encoded JPEG.
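For the curious, a minimal sketch of that round trip is below, assuming a hypothetical `https://example.com/anonymize` endpoint that returns the anonymized JPEG bytes; the URL and field names are illustrative, not the bot's actual API.

```python
import base64
import requests

# Hypothetical endpoint; the real service and URL may differ.
ANONYMIZE_URL = "https://example.com/anonymize"

def anonymize_in_memory(image_path):
    """Send an image to the model and get a data URI back.

    Nothing is written to disk on either side: the request carries the
    raw bytes, and the response bytes are base64-encoded so a browser
    can render them directly from a data URI.
    """
    with open(image_path, "rb") as f:
        resp = requests.post(ANONYMIZE_URL, files={"image": f})
    resp.raise_for_status()

    encoded = base64.b64encode(resp.content).decode("ascii")
    return f"data:image/jpeg;base64,{encoded}"

# The returned string can be dropped straight into an <img src="..."> tag.
```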

The bot tweeted my image with the BLM fist emoji on it. Can you take it down? Yes, absolutely. Please DM the bot or reply directly.

Can you talk a bit more about your AI technical approach? Yes! The open-source repo is here. We build on state-of-the-art crowd counting AI, because it offers huge advantages over traditional facial recognition models when anonymizing crowds. Traditional methods can only find a handful of faces (fewer than 20, and often fewer than 5) in a single image. Crowds of BLM protesters can easily exceed 50 people in a single image, and often number in the hundreds or thousands. The model we use in this work has been trained on over 1.2 million people in the open-source research dataset QNRF, with crowds ranging from a few people to the thousands. False negatives (missed faces) are the worst kind of error in our case.
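As a rough illustration of how head localizations from a crowd-counting model can drive the masking step, here is a hedged sketch; the (x, y, score) output format, box size, and threshold are assumptions, and the low threshold reflects the preference for avoiding false negatives.

```python
import numpy as np

def points_to_face_boxes(head_points, box_size=60, score_threshold=0.2):
    """Turn head localizations from a crowd-counting model into mask boxes.

    head_points is an array of (x, y, score) predictions (the format and
    scores here are illustrative). Because a missed face (false negative)
    is the worst error for anonymization, the threshold is kept
    deliberately low: better to mask a lamppost than leave a protester
    exposed.
    """
    half = box_size // 2
    boxes = []
    for x, y, score in head_points:
        if score < score_threshold:
            continue
        boxes.append((int(x) - half, int(y) - half, int(x) + half, int(y) + half))
    return boxes

# Hypothetical output for three detected heads in a crowd photo:
boxes = points_to_face_boxes(np.array([
    [150.0, 90.0, 0.95],
    [410.0, 120.0, 0.35],
    [700.0, 85.0, 0.10],
]))
# The first two points become mask boxes; the third falls below the threshold.
```

Each resulting box could then be passed to an opaque-masking step like the one sketched earlier.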

How does the bot work? We curate weekly sets of images from Twitter, anonymize them, and post them to Twitter through @BLMPrivacyBot. To be clear, we do not automatically post images uploaded here.

Other amazing tools

We would love to showcase other parallel efforts (please propose any we have missed here!). And if this is not the tool for you, please check these tools out too:

- Image Scrubber
- Censr (iOS and Android app)
- And more...

Brought to you by Stanford Machine Learning researchers Sharon Zhou, JQ, and Krishna Patel.

Please contact us at blm@cs.stanford.edu if you have any questions! If you are a Stanford activist organization that would like to broadcast a broader BLM message at this domain, please let us know.

Thank you to Aurelia Augusta, Andrey Kurenkov, Cody Coleman, Katie Schluntz, Timnit Gebru, Jane E, Adji Dieng, Jeremy Nixon, Samee Ibraheem, and Jean Betterton for their helpful advice and encouragement!