Will Banning Facial Recognition Solve Our Surveillance Problems?

People attend a meeting wearing masks to protest the police department's use of facial recognition technology in Detroit, Michigan, on July 11, 2019. © Jim West/ZUMA/Newscom

This month, IBM, Microsoft, and Amazon made headlines by announcing that they were temporarily limiting the sale of their facial recognition technologies to police in the United States. The announcements came amid mounting pressure to end racial inequality and police violence in the U.S. and beyond.

Critics questioned whether the three corporations' announcements were a public relations stunt, and several called for an end to the use of the software altogether. Since then, members of Congress have introduced legislation to limit facial recognition and other remote biometric recognition technologies.

For years, civil rights organizations and artificial intelligence experts have warned that when police use facial recognition, they target people of color more often than white people, further fueling racial profiling. Research shows that developers built and trained their algorithms within a racially biased system, making Asian and African American people up to 100 times more likely to be misidentified than white men. And regardless of its accuracy, facial recognition technology is more likely to be used by police against minority communities, in the same ways those communities have always been targeted.

The reality is that ending facial recognition technology, temporarily or forever, will not solve the problem of mass surveillance. Without substantial changes in the industry and much-needed government regulation, data-driven policing will simply continue by means other than facial recognition, and outside the traditional bounds of law enforcement. Experts warn that mass surveillance is growing in other government systems, such as border control, social welfare, and criminal justice.

Many governments around the world claim that new biometric and automated technologies are aimed at a straightforward goal: centralizing and streamlining information in inefficient bureaucracies. But when personal data is linked together in centralized systems, there is more opportunity for surveillance and profiling by governments.

Consider the data my own government already holds on me in databases kept by different departments: tax returns, driver's licenses, voting records, and border and immigration records, to name a few. Should this information be linked together and made accessible across departments? Do your background and skin color affect how easily you can shrug off the eerie feeling of becoming so fully visible? Absolutely.

In this Black Mirror reality, we are moving toward a data infrastructure where more and more of our information will be available at the click of a button. In some cases, governments are considering a card or universal ID number, linked back to a central database, that would be required to access government services like social benefits and even private services like opening a bank account. Individuals need assurance that their data is secure and not used beyond its originally stated purpose, both by governments and by the private, often foreign, companies that develop these systems.

Developers and vendors are making millions off these newly centralized surveillance systems, amassing wealth by exploiting a mishmash of regulatory standards around the world, most often in Global South countries with weaker state capacity. Multinational companies sell the technology under the banner of international development and foreign aid, but it runs on the same basic logic as the law enforcement and immigration-driven surveillance technologies that have been quietly implemented in the United States. Many developers are well-known corporate actors, including weapons manufacturers that already face frequent allegations of corruption.

Once these automated systems are in place, it is hard to put them back in the box. India's digital ID system, Aadhaar (meaning "foundation" in English), was designed to give every resident a unique, verifiable ID number to be used in all interactions with the state and in private transactions such as opening bank accounts and buying SIM cards for cellphones. In 2018, India's Supreme Court found the system constitutionally valid, but it limited the system's mandatory uses and held that it would require a robust data protection law.

Less than a year later, India's Assam state was in the news for implementing a "National Register of Citizens" (NRC) process, which threatens to strip approximately two million people of their citizenship and risks leaving them stateless. It was later revealed that the NRC and Aadhaar databases would be linked to track "suspected illegal foreigners" and "restrict their access to government documents and services." This would allow the authorities in Assam to profile extremely vulnerable individuals as they access basic goods and services, and to use that information against them. What began as a tool to streamline access to state welfare became an authoritarian instrument of suppression and exclusion.

In 2018, almost ten years after India's Aadhaar system was first announced, the Kenyan government unveiled plans for a similar central biometric digital ID system, the "National Integrated Identity Management System" (NIIMS). IDEMIA, a French company, sold Kenya the biometric kits used to collect data for NIIMS. The system risks excluding millions of Kenyans who, because of their ethnicity, already struggle to access identity documents. In a country rife with ethnic tension and power struggles, the effects such systems have on particular ethnic communities are no coincidence; they are rooted in historical injustices. In a 2019 lawsuit, one expert testified that testing experimental technologies at scale in Kenyan society and only then deciding how to regulate them would spell disaster, because "the law can't fix what technology has broken."

Also in 2018, in the United States, the Department of Homeland Security (DHS) quietly announced "Homeland Advanced Recognition Technology" (HART), a database that will include multiple forms of biometric and biographic information on citizens and foreigners in the U.S. and will share information with state and local law enforcement and foreign governments. At the time of its announcement, it was set to be the second-largest biometric database in the world, after India's Aadhaar system. HART's size and scale raise fundamental questions about how the U.S. government will use the database, and how long U.S. residents have until they get their own NIIMS or Aadhaar. Discrimination under such a system is almost certain.

Privacy and surveillance issues, and the technology that drives them forward, are not neutral in cause or effect. Discrimination and racism are not glitches that can be ironed out with a better algorithm. Even IBM, Microsoft, and Amazon recognize that their technology is reinforcing and expanding the structural problems inherent in all our societies.

Data rights, privacy, and equality advocates must unite across disciplines, borders, and lived experiences, combining local and transnational action, to expose the ways these technologies exacerbate the existing structural inequalities facing marginalized and vulnerable communities, and to meet the challenge of realizing a just society in the midst of rapid change.
