The Moral Dilemma of Facial Recognition Technology

(A final research project for a Science and Technology course I took)

Facial recognition technology is a biometric technology that identifies or verifies individuals by analyzing and comparing patterns in their facial features. Facial recognition systems can be used to identify people in real time, in photos, or in videos. Most people are familiar with the technology through Face ID, which is used to unlock iPhones. It has gained widespread adoption across industries and applications including law enforcement, security and surveillance, marketing, and personalized user experiences. Facial recognition operates through a combination of machine learning techniques, computer vision algorithms, and facial biometrics to detect, extract, encode, and match facial data. It works by capturing images or videos of faces with cameras or other devices, then processing that data to extract distinctive facial features and create mathematical representations known as face templates or embeddings. These templates are compared against a database of known faces to determine whether there is a match, indicating the presence of a specific individual or verifying their identity.
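To make the template-matching step concrete, here is a minimal, illustrative Python sketch, assuming faces have already been encoded as numeric embedding vectors. Real systems produce high-dimensional embeddings with a deep neural network; the names, toy vectors, and threshold below are hypothetical.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_face(probe, database, threshold=0.8):
    """Compare a probe embedding against enrolled templates.

    Returns the best-matching identity if its similarity clears the
    threshold, otherwise None (no match found).
    """
    best_name, best_score = None, -1.0
    for name, template in database.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Toy 4-dimensional "embeddings"; real systems use hundreds of
# dimensions produced by a deep neural network.
enrolled = {
    "alice": [0.9, 0.1, 0.2, 0.1],
    "bob":   [0.1, 0.9, 0.1, 0.2],
}
print(match_face([0.88, 0.12, 0.18, 0.11], enrolled))  # probe close to alice's template
print(match_face([0.5, 0.5, 0.5, 0.5], enrolled))      # ambiguous probe, below threshold
```

The threshold is the key operational choice: lowering it catches more true matches but also produces more of the false positives discussed below.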

One of the biggest moral concerns surrounding facial recognition technology is its potential to infringe upon individuals’ right to privacy. These systems can capture and analyze facial data without people’s consent or knowledge, raising concerns about both covert tracking of individuals and mass surveillance. Governments and law enforcement can deploy facial recognition for mass surveillance, often under the guise of public safety or national security. Public surveillance cameras equipped with facial recognition can capture facial data from people in crowded streets, parks, transportation hubs, and other public areas without their knowledge, creating a pervasive atmosphere of surveillance. The technology is also often integrated into devices and services such as smartphones, smart home devices, and social media platforms, whose applications collect and process facial data for user authentication, personalized recommendations, and targeted advertising. Even where notices are posted about the use of facial recognition, there is rarely an opportunity to opt out without forgoing access to the space or service, presenting a coercive scenario rather than a genuine choice.

In addition to privacy concerns, bias and discrimination are significant ethical problems associated with facial recognition, stemming from inherent biases in the algorithms and the data used to train these systems. Algorithmic limitations and poor-quality reference data also contribute to false positives and misidentifications. Facial recognition algorithms may exhibit biases based on race, gender, age, and other demographic characteristics, which can lead to inaccurate and discriminatory outcomes. These biases can perpetuate existing social inequalities, reinforce stereotypes, and disproportionately impact certain communities, exacerbating disparities within our society. Facial recognition software has been found to be less accurate at identifying individuals with darker skin tones, leading to higher rates of misidentification and false positives among minority populations. This racial bias is attributed to the underrepresentation of diverse racial and ethnic groups in the training data used to develop these algorithms: models may struggle to accurately recognize and classify faces that deviate from the predominant racial groups in the training data, demonstrating a clear flaw in the development of this technology. False positives and misidentifications can result in individuals being wrongfully accused or arrested for crimes they did not commit, and law enforcement agencies that rely on facial recognition as evidence in criminal investigations risk the wrongful prosecution of innocent people.


A utilitarian assessment of this technology would judge its ethical validity through a cost-benefit analysis, weighing the societal and individual benefits of facial recognition against the potential harms it could bring. The use of facial recognition would be justified if it results in a net positive outcome for society. It would also require measures to minimize negative impacts, especially on vulnerable populations, to ensure the benefits outweigh the harms. From a utilitarian perspective, the lack of individual consent may be deemed acceptable in some contexts if facial recognition significantly contributes to the public good, such as preventing or solving crime and thereby enhancing public safety.

Because deontology judges the morality of actions based on rules and duties rather than their consequences, it would emphasize the importance of individual consent and autonomy. Using someone's biometric data without explicit consent would be inherently unethical from a deontological perspective, regardless of the potential benefits. This framework would demand universal principles to govern the use of facial recognition, including respect for privacy, fairness, and nondiscrimination. Any infringement of these rights by facial recognition technologies would need to be strictly justified, and mass surveillance practices would most likely be viewed as inherently problematic. Deontology would prioritize the right to privacy as a fundamental ethical duty and would require these principles to be applied consistently, regardless of the potential outcomes.

Rawls’ theory of justice as fairness introduces the veil of ignorance and the original position as a way to determine the justness of societal arrangements. Applying Rawls’ concept, the use of facial recognition would be evaluated from an original position behind a veil of ignorance, where no one knows their place in society. This ensures the technology is assessed impartially, without bias toward any particular group. Rawls argues for two principles of justice: the liberty principle and the difference principle. Under the liberty principle, any use of facial recognition technology must not infringe upon individuals’ basic rights and freedoms, including privacy and freedom of expression. The difference principle requires that social and economic inequalities be arranged so that they are reasonably expected to work to everyone’s advantage, and that positions of advantage in society be open to all. In the context of this technology, any use must therefore consider its impact on the most disadvantaged groups and ensure that it does not exacerbate existing inequalities or discrimination.

When evaluating which ethical framework best handles this moral dilemma, the framework must be able to balance the technology’s potential for significant societal benefit against its serious ethical concerns. Rawls’ theory seems best suited to the dilemmas posed by facial recognition because it provides a comprehensive framework that navigates the technology’s complexities by emphasizing equality, fairness, and individual rights. Rawls’ liberty principle would ensure that any use of facial recognition respects fundamental rights and freedoms such as privacy and freedom of expression. Unlike utilitarianism, which might sacrifice individual rights for the greater good, this approach ensures that basic liberties are not infringed upon, establishing a strong ethical foundation for the technology. The difference principle would demand that the technology benefit the least advantaged members of society, addressing one of the most significant concerns with facial recognition: its biases and its potential to exacerbate social inequalities and discrimination. By prioritizing the welfare of the least advantaged, this approach directly addresses issues of bias and fairness in the technology’s development and use. The original position and the veil of ignorance would encourage developers, policymakers, and society as a whole to consider the impacts of this technology on all groups, especially those who may be most negatively affected. Overall, this framework provides a stable ethical foundation that can guide the development and use of facial recognition technology even as social norms and the technology itself continue to change and advance.


An OTA (Office of Technology Assessment) approach to facial recognition technology would likely involve conducting in-depth studies on the technology’s capabilities, limitations, and potential societal impacts. This could include examining its potential for bias, privacy concerns, and the implications for civil liberties. I believe the OTA would develop and provide policy recommendations to lawmakers, aiming to balance the benefits of facial recognition with the need to address ethical concerns and protect individual rights. Although the OTA primarily served Congress, it would most likely gather input from various stakeholders, such as developers, privacy advocates, and affected communities, to inform its analysis and recommendations.

A PTA (participatory technology assessment) approach to facial recognition would likely involve actively engaging citizens and communities in discussions about the technology, its uses, and its effects, for example through surveys or public forums. PTA would use democratic deliberation to enable a diverse cross-section of society to contribute to and be involved in decision-making about the use and regulation of this technology. It would also provide accessible information about facial recognition and its potential impacts, giving the public the opportunity to be more involved in decisions about it.

Applying CTA (constructive technology assessment) to facial recognition would likely involve engaging stakeholders, including technology developers, ethicists, and potentially affected individuals or groups, in the design of the technology itself. The aim would be to ensure the technology aligns with societal values and ethical standards before it is fully deployed and widespread. A CTA approach would encourage a culture of reflection, prompting continuous and thorough consideration of the technology’s societal implications.


While both OTA and PTA are valuable frameworks for engaging with and analyzing this technology from a policy and public perspective, I believe the CTA paradigm handles the multifaceted social and ethical dilemmas of facial recognition best. By involving a wide range of stakeholders in the creation and design phases of development, CTA ensures that societal values and ethical considerations are embedded in the technology from the outset. After-the-fact regulations or public consultations may be insufficient to mitigate biases or privacy invasions, so this preemptive approach is crucial for a technology like facial recognition. CTA also allows for flexibility in response to emerging ethical challenges and technological advancements, supporting ongoing assessment and adjustment of technology practices in line with ethical standards and social expectations. Through extensive planning and foresight, CTA can help society prepare for and shape the trajectory of this technology so that it minimizes harm and maximizes benefits. It additionally promotes continuous ethical consideration and learning, rather than merely aiming to restrict or regulate technology. Facial recognition demands a balance between guiding development in a direction that aligns with social values and still encouraging innovation, and CTA provides a framework for exactly that.


A re-conceptualization of facial recognition technology to align with more ethical practices would require a multi-faceted approach that respects individual privacy, ensures fairness and equity, and is transparent and accountable. This would involve not only regulatory and technical measures, but also a cultural shift in how biometric data is perceived and used. 

Firstly, incorporating privacy considerations into the design and development phase would help ensure that facial recognition systems collect only the data necessary for their intended purpose and retain it for the shortest time necessary. This could also include implementing advanced privacy-enhancing technologies such as differential privacy, which adds carefully calibrated noise to datasets in a way that prevents the identification of individuals without completely compromising the utility of the data. Additionally, a clear focus on data minimization, collecting only the minimum amount of data necessary for a specific purpose, would further protect individuals’ privacy.
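As a minimal sketch of how differential privacy's noise-adding works in practice, the example below implements the classic Laplace mechanism for a counting query. The scenario, data, and parameter values are hypothetical; a production system would need far more machinery (privacy budgets, composition tracking) than this illustration.

```python
import math
import random

def laplace_noise(scale):
    """Sample zero-mean Laplace noise via the inverse-CDF method."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon=1.0):
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so adding Laplace noise with
    scale 1/epsilon satisfies epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical example: publish how many enrolled faces matched some
# criterion without revealing any single individual's presence.
matched_flags = [True] * 40 + [False] * 60
print(dp_count(matched_flags, lambda f: f, epsilon=0.5))  # randomized, near 40
```

Smaller epsilon means more noise and stronger privacy; the released count stays useful in aggregate while no individual's inclusion can be inferred from it.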

The next step would be to require clear, explicit, informed consent from individuals before capturing or processing their facial data. This could involve opt-out mechanisms in public spaces and in consumer technology that allow individuals to easily disable facial recognition for themselves or their children. Developing systems for dynamic consent that are flexible and context-aware seems ambitious, but it would enable individuals to allow, adjust, or revoke consent for specific contexts, time frames, or uses. Empowering individuals with control over their data also includes easy access to information on how their facial data is used, along with options to correct inaccuracies or delete their data.

Thirdly, transparency should be of the utmost importance: entities using facial recognition should be required to disclose their use of the technology, including the purposes for which it is used and the data protection measures in place. This may involve establishing independent bodies to oversee the use of facial recognition technologies, ensuring they comply with ethical standards and legal requirements. Enacting laws that regulate the deployment of facial recognition would put more focus on protecting civil liberties, preventing misuse, and ensuring accountability. This legislation would ideally go beyond general privacy laws to include mandates on accuracy, consent, transparency, and redress mechanisms for harm or misuse.

Bias detection and correction are also central to the re-conceptualization of this technology. Investing in research dedicated to detecting and correcting biases in facial recognition algorithms would ensure continuous improvement in fairness and accuracy. Additionally, bringing more diverse teams and perspectives into the development stage would help prevent unconscious bias from transferring into the technology. Before deploying facial recognition, entities should conduct thorough community impact assessments to fully understand and mitigate potential harms, especially in marginalized or vulnerable communities.
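One concrete form such bias auditing could take is comparing error rates across demographic groups. The sketch below computes a per-group false match rate from a handful of evaluation records; the group labels, record format, and numbers are invented for illustration only.

```python
def false_match_rate(results, group):
    """Share of impostor trials (comparisons of different people) from
    `group` that the system incorrectly declared a match."""
    impostor_trials = [r for r in results
                       if r["group"] == group and not r["same_person"]]
    if not impostor_trials:
        return 0.0
    false_matches = sum(1 for r in impostor_trials if r["matched"])
    return false_matches / len(impostor_trials)

# Hypothetical evaluation log: each entry is one comparison trial.
records = [
    {"group": "A", "same_person": False, "matched": True},
    {"group": "A", "same_person": False, "matched": False},
    {"group": "A", "same_person": False, "matched": False},
    {"group": "A", "same_person": False, "matched": False},
    {"group": "B", "same_person": False, "matched": False},
    {"group": "B", "same_person": False, "matched": False},
    {"group": "B", "same_person": False, "matched": False},
    {"group": "B", "same_person": False, "matched": False},
]
for g in ("A", "B"):
    print(g, false_match_rate(records, g))
```

A persistent gap between groups in an audit like this would flag exactly the kind of disparity described above and indicate where training data or decision thresholds need correction.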

By implementing these measures and addressing the ethical concerns of facial recognition technology at multiple levels, drawing on the ethical frameworks and assessment paradigms discussed above, we can create a technology that respects individual rights and promotes societal well-being. The aim of this re-conceptualization is not only to mitigate the risks and harms that facial recognition poses, but also to harness its benefits in a way that is equitable, just, and aligned with the broader values of society.

 

