In the wrong hands, facial recognition technology can have a chilling effect on freedom of movement and association. Face recognition is one of the most powerful and complex technologies of the modern era. The concept of privacy is fundamental to the American way of life. At the same time, citizens want law enforcement to have access to the best tools to make their neighborhoods safe. Facial recognition technology stands at the intersection of the “privacy versus security” debate, and experts on both sides of the issue present compelling arguments. Technology in this area continues to improve while policymakers and the courts grapple with the acceptable legal limits of enhanced surveillance.
Literature Review Assessment
An assessment of the literature related to facial recognition policy reveals common themes and assumptions as well as weaknesses and limitations that could have implications for future research. Facial recognition is an important research problem cutting across several fields and disciplines. This is because, in addition to its many practical applications such as access control and surveillance, recognizing faces is a fundamental human behavior essential for effective communication and interaction (Tolba, 2005).
Themes, Assumptions, and Approaches
Much of the literature touts facial recognition as a new, emerging technology; however, humans have been recognizing each other for thousands of years—in the caves and later in the cubicles. As with all “new” technology, initial concerns have to do with safety. Because the technology (unlike a self-driving car) cannot directly injure or kill anyone, the literature pushes out to other potential harms. Three primary themes emerge within the literature: 1) potential abuse by (or benefit to) law enforcement; 2) privacy implications for (presumably innocent) individuals; and 3) a somewhat ill-defined fear of machines operating autonomously to enslave humanity.
Facial recognition technology is of interest to both scientists (technical) and sociologists (generally non-technical). As such, technical reports (Grother, 2019) have been misinterpreted and have led to questionable claims, including that the technology is inherently racist. All technology is neutral but, of course, can be used for either good or ill, as the Bible admonishes those who “[have] the understanding darkened, being alienated from the life of God through the ignorance that is in them, because of the blindness of their heart” (Ephesians 4:18, KJV).
Facial images (aka “mugshots”) have been tied to criminality since the 1880s, when French police began using them to help the public identify suspects (Cramer, 2020). Because of this association with law enforcement, opponents of the police have attempted to make the issue political instead of viewing what is basically a photo as just another way to generate leads. The photo is one small part of an overall case and may be used to associate an individual with a particular crime. But unlike other discrete data points such as shoe size, color of clothing, and the license plate of the getaway car, a photo of a suspect is viewed by some (mostly on the political left) as something so private and controversial that it should not be publicly revealed in order to help solve a crime (Crump, 2021).
There are clear privacy implications associated with facial images. Nearly all the literature mentions privacy, but mostly only in passing, because it is a complex legal topic and requires more nuanced research than most authors care to conduct. Even anecdotally, however, it is clear there are privacy issues related to the face. For example, many people do not like to be photographed for any number of reasons known only to them. Is the person a wanted felon on the lam? Maybe a spy? It is true that a picture is worth a thousand words because a facial image reveals so much about an individual. For example, a picture reveals race, approximate age, religion (a hijab), and whether someone has poor eyesight (glasses). When compared in context, a picture can reveal much more (e.g., “Where were you, and what were you doing on the night of the murder?”).
The literature notes that some policymakers are skeptical (Zakrzewski, 2020) of the technology, and this skepticism is exacerbated by the fact that the algorithms are often trained using artificial intelligence (AI), another complex technology that is not well understood. This is not to say that the technology cannot be abused when in the wrong hands. In fact, China is the biggest abuser of facial recognition technology, using it in combination with other technologies to crack down on political dissent and enslave the minority Uighur population (Andersen, 2020).
An assumption in much of the literature is that the face is a special kind of biometric, somehow more concerning than fingerprints and DNA, although those modalities are also not without controversy. There is actually something special about the face in that it is generally openly observable and recognizable. While hands and fingers are also generally observable, most of us do not recognize our family members and friends by looking first at their hands. This was highlighted when the psalmist so eloquently stated “When thou saidst, Seek ye my face; my heart said unto thee, Thy face, Lord, will I seek” (Psalm 27:8, KJV).
Weaknesses and Limitations
One topic that is almost entirely missing from the literature is the relationship between technology, public policy, and Christian ethics. Until recently, facial recognition technology had clear practical limits, because manually identifying one individual from a stack of a billion photos would take far too long. Once each photo is converted into a mathematical representation and packaged into a small file, however, computers can (relatively) quickly make comparisons and provide (mostly) accurate results. Yet just because we can do something does not mean we should, such as collecting and comparing images captured on every sidewalk or at every traffic light in hopes of finding a jaywalker. At some point, the cost to society outweighs any benefit. Do we really want people to stay indoors continually because they are afraid of being charged with a petty crime? In other words, is it ethical to use a given technology to assist in instilling fear in the population?
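The comparison step described above can be sketched in a few lines. This is a minimal illustration, not any vendor’s actual method: the three-dimensional vectors, the gallery names, and the match threshold are all hypothetical stand-ins for the much larger embeddings real systems produce.

```python
import math

def cosine_similarity(a, b):
    # Compare two face "embeddings" (lists of floats) by the angle
    # between them: 1.0 means identical direction, near 0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings for a probe photo and a small gallery.
probe = [0.1, 0.9, 0.3]
gallery = {
    "person_a": [0.1, 0.9, 0.3],  # same face as the probe
    "person_b": [0.9, 0.1, 0.2],  # a different face
}

# An illustrative decision threshold; real systems tune this value.
THRESHOLD = 0.8
matches = {name: cosine_similarity(probe, emb) >= THRESHOLD
           for name, emb in gallery.items()}
```

Because each comparison is a handful of multiplications rather than a human judgment, a computer can sweep a probe image against millions of stored vectors in seconds, which is exactly what removes the old practical limits discussed above.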
Those commissioning the collection and funding the algorithm research hope to achieve a certain return on investment, and one must question the ultimate goal. I would argue that for most researchers, the goal is simply better and more accurate results without much thought to the endgame. For those at the top of the pyramid, facial recognition represents an efficient means of not only tracking everyone but also positively identifying someone who seeks to obscure his true identity. Phones are currently used to track everyone (Dixon, 2020), but, in its current form factor, a phone can be given to someone else or discarded. Someone can choose not to use a phone. However, no one can choose not to have a face, which makes it difficult to avoid unwanted detection and tracking.
Much of the literature put forth by civil libertarians argues that the technology is wildly inaccurate (CPT, 2017)—or worse—inherently racist (Laperrurue, 2017). Technology is constantly improving, and the algorithms will become more accurate over time as more data is ingested. This is simply the nature of AI and facial recognition algorithms. To address the racism issue directly: China has some of the best algorithms because it imposes few human-rights limits and forces enrollment. Also, the Chinese government has access to all of the country’s private data holdings, whereas most U.S. data is in private hands and would be expensive to replicate because of informed-consent requirements. So, yes, facial recognition is racist—when used by the Chinese government to target ethnic minorities.
Another weakness of the existing literature is that the authors discussing policy do not understand the technology, while the authors who understand the technology are not in the business of making policy suggestions. Rather, they are researchers who look at the data from a neutral standpoint and genuinely want to see the technology progress. My assessment of the literature is that the technologists are more independent, more neutral, and—dare I say—more ethical than those making policy assessments and suggestions. I believe this is due, in part, to the funding mechanisms. Wealthy individuals can found think tanks and non-governmental organizations and create fellowships to promote their worldview, all the while enjoying the related tax benefits. Government research, on the other hand, is publicly funded (NIJ, 2020) and has several layers of oversight.
Implications for Future Research
There are no conclusions common to all the sources, perhaps because the technical implementation of the technology has been so clunky and fraught with risk from the beginning that everyone reached their own conclusions early. This actually highlights the need for coordination and standards, which is within the purview of the National Institute of Standards and Technology (NIST). There is a time for corporate secrecy, but the field has greatly benefitted from open collaboration among public and private stakeholders, which has driven down the cost of the technology such that society can now afford to reap the benefits (assuming there are some).
Regarding errors and oversights, some of the literature suggests the need for federal intervention and nationwide mandates. However, I see value in letting the states approach the issue in various ways so that bad policy does not infect the whole nation but rather only a sliver. For example, Illinois implemented the Biometric Information Privacy Act (BIPA) in 2008. While the law was intended to guard against the unlawful collection and storing of biometric information, it has mainly just caused confusion that has bled into the courts (Germain, 2020).
Facial recognition has been much debated in public policy circles, and this is expected to continue as policymakers, the courts, and society writ large grapple with establishing boundaries for personal bodily integrity where it intersects with legitimate state interests in personal welfare and public order. There will always be those who exploit technology for personal gain and illegitimate social control. This is one reason some argue that additional safeguards must be implemented for facial recognition technology. Ultimately, however, there is no distinguishing good technology from bad technology, only malicious intent from virtuous intent.
References
Andersen, Ross. “The Panopticon is Already Here.” The Atlantic (2020). https://www.theatlantic.com/magazine/archive/2020/09/china-ai-surveillance/614197/
Baker, Brett. M. “Framework for Grant Oversight.” National Science Foundation International Workshop on Accountability in Science and Research Funding (2012). https://www.nsf.gov/oig/_pdf/presentations/intl_workshops/paris2012/3baker.pdf
Blackburn, Duane. “Biometric Face Recognition: References for Policymakers” (2020). https://www.mitre.org/publications/technical-papers/biometric-face-recognition-references-for-policymakers
Congressional Research Service. “Facial Recognition Technology and Law Enforcement: Select Constitutional Considerations” (2020). https://crsreports.congress.gov/product/pdf/R/R46541
Cramer, Maria. “The Mug Shot, a Crime Story Staple, Is Dropped by Some Newsrooms and Police.” New York Times (2020). https://www.nytimes.com/2020/07/03/us/mugshot-san-francisco-police.html
Crump, James. “Teen Sets Bus Passenger’s Hair on Fire in Shocking Video” (2021). Newsweek. https://www.newsweek.com/san-francisco-bus-teen-passenger-hair-set-fire-1597572
Davis, Wendy N. “Face Time; Facial Recognition Technology Helps Nab Criminals—and Raises Privacy Concerns.” ABA Journal 103, no. 10 (2017): 16-8. https://www.jstor.org/stable/26516097.
Department of Homeland Security. “Report 2019-01 of the DHS Data Privacy and Integrity Advisory Committee (DPIAC): Privacy Recommendations in Connection with the Use of Facial Recognition Technology” (2019). https://www.dhs.gov/sites/default/files/publications/Report%202019-01_Use%20of%20Facial%20Recognition%20Technology_02%2026%202019.pdf
Diaz, Danielle. “Technology and the Challenges Facing the Fourth Amendment.” M.S., Utica College (2018). Criminal Justice Database, ProQuest Central, http://ezproxy.liberty.edu/login?qurl=https%3A%2F%2Fwww.proquest.com%2Fdissertations-theses%2Ftechnology-challenges-facing-fourth-amendment%2Fdocview%2F2037205127%2Fse-2%3Faccountid%3D12085.
Dixon, Herbert T. “Your Cell Phone Is a Spy” (2020). American Bar Association. https://www.americanbar.org/groups/judicial/publications/judges_journal/2020/summer/your-cell-phone-a-spy/
Georgetown Law Center on Privacy and Technology. “Not Ready for Takeoff: Face Scans at Airport Departure Gates” (2017). https://www.airportfacescans.com/sites/default/files/Biometrics_Report__Not_Ready_For_Takeoff.pdf
Germain, Thomas. “Why Illinois Has Become a Battleground for Facial Recognition Protection.” Consumer Reports (2020). https://www.consumerreports.org/privacy/why-illinois-has-become-a-battleground-for-facial-recognition-protection/
Gilg, Deborah. “Know Your Rights: A guide to the United States Constitution,” Department of Justice (2021). https://www.justice.gov/sites/default/files/usaone/legacy/2012/04/27/Civil%20Rights%20Book-NE-2.pdf
Goodwin, Gretta L. “Face Recognition Technology: DOJ and FBI Have Taken Some Actions in Response to GAO Recommendations to Ensure Privacy and Accuracy, But Additional Work Remains.” Government Accountability Office (2019). https://www.gao.gov/products/gao-19-579t
Government Accountability Office. “Privacy and Accuracy Issues Related to Commercial Uses,” GAO-20-522 (2020). https://www.gao.gov/assets/gao-20-522.pdf
Grother, Patrick, Mei Ngan, and Kayee Hanaoka. “Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects.” National Institute of Standards and Technology (2019). https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
Jalaluddin, Arooj Zafar. “An Exploration of Countermeasures to Defend Against Weaponized AI Malware Exploiting Facial Recognition.” D.Sc., Capitol Technology University (2020). ProQuest Central, http://ezproxy.liberty.edu/login?qurl=https%3A%2F%2Fwww.proquest.com%2Fdissertations-theses%2Fexploration-countermeasures-defend-against%2Fdocview%2F2446975948%2Fse-2%3Faccountid%3D12085.
Laperrurue, Jake. “Preserving the Right to Obscurity in the Age of Facial Recognition.” The Century Foundation (2017). https://tcf.org/content/report/preserving-right-obscurity-age-facial-recognition/?agreed=1
Merlano, Shari. “Privacy Concerns regarding the use of Biometrics in Trusted Traveler Programs.” Ph.D., Walden University (2016). Political Science Database, ProQuest Central, http://ezproxy.liberty.edu/login?qurl=https%3A%2F%2Fwww.proquest.com%2Fdissertations-theses%2Fprivacy-concerns-regarding-use-biometrics-trusted%2Fdocview%2F1847950014%2Fse-2%3Faccountid%3D12085.
National Institute of Justice. “History of NIJ Support for Face Recognition Technology” (2020). https://nij.ojp.gov/topics/articles/history-nij-support-face-recognition-technology
Tolba, Ahmad, Ali El-Baz, and Ahmed A El-Harby. “Face Recognition: A Literature Review” (2005). https://www.researchgate.net/publication/233864740_Face_Recognition_A_Literature_Review
Yeung, Douglas, Rebecca Balebako, Carlos Ignacio Gutierrez Gaviria, and Michael Chaykowsky, “Face Recognition Technologies: Designing Systems that Protect Privacy and Prevent Bias.” Homeland Security Operational Analysis Center operated by the RAND Corporation, 2020. https://www.rand.org/pubs/research_reports/RR4226.html.
Zakrzewski, Cat. “The Technology 202: Facial Recognition Gets Another Look on Capitol Hill Today From Skeptical Lawmakers.” Washington Post (2020). https://www.washingtonpost.com/news/powerpost/paloma/the-technology-202/2020/01/15/the-technology-202-facial-recognition-gets-another-look-on-capitol-hill-today-from-skeptical-lawmakers/5e1dfc4588e0fa2262dcd2b5/