Cornell Current Club

Facial Recognition Software: Testing Ethical Limits

By Ben Ladabaum '20

Facial recognition software is one of the most controversial scientific developments of our time. Proponents point to the technology's promise of greater protection against terrorism and crime, as well as increased consumer convenience, but critics worry that it dangerously crosses ethical boundaries, particularly with regard to privacy and profiling.

In China, where the government has funded a $1.4 billion facial recognition project, the technology was used to catch dozens of suspected criminals at a beer festival, including some who had been on the run from authorities for years. Similarly, after a terrorist attack in Berlin killed 12 people, the German government began testing facial recognition software at train stations. The goal: to identify suspects through train station cameras and notify the police.

Facial recognition also has considerable potential in commercial markets. For example, Chinese fast-food companies have partnered with Alipay, a mobile payment firm, to let customers pay for their orders by matching their faces to the photos on their Alipay accounts. In addition, China Southern Airlines has already used facial recognition in place of boarding passes.

Ethical objections over privacy and racial profiling have been fierce, but so far little has been done to regulate the use of facial recognition software.

In an attempt to highlight the technology's potential for abuse, two scientists from Stanford University developed an algorithm that predicted an individual's sexual orientation with 81% accuracy based solely on a picture of his or her face. Tellingly, the scientists did not build the underlying technology themselves; they borrowed it from existing facial recognition systems. While undoubtedly controversial, the study reveals how easily private information could be acquired without a person's knowledge or consent.
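Part of what makes the study alarming is how little original engineering it requires. The sketch below is a minimal illustration of that general pattern, not the study's actual code: it assumes face embeddings of the kind an off-the-shelf facial recognition model already produces, and the data, embedding dimension, and classifier choice are all illustrative stand-ins.

```python
# Illustrative sketch: fit an ordinary classifier on top of feature
# vectors ("embeddings") that an existing facial recognition model
# would output. All data below is synthetic; a real repurposing of the
# technology would substitute embeddings extracted from face images.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for per-face embeddings (e.g., a 128-dimensional vector
# that an off-the-shelf face recognition network produces per image).
n_faces, embedding_dim = 1000, 128
embeddings = rng.normal(size=(n_faces, embedding_dim))
labels = rng.integers(0, 2, size=n_faces)  # stand-in binary attribute

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, labels, test_size=0.2, random_state=0
)

# A plain logistic regression suffices once the hard part (turning a
# face image into a feature vector) is borrowed from an existing system.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The point of the sketch is that the classifier itself is commodity machine learning; the sensitive inference rides entirely on features an existing system already computes for other purposes.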

Facial recognition software will likely bring the debate about privacy to the forefront of public discourse within the next several years. Who should have access to facial recognition databases? The government? Companies? Nobody? Is it still racial profiling if a computer singles out individuals based on empirical data, without influence from stereotypes and societal biases? These are among the questions that will have to be addressed in the coming years. However, in a world where policy tends to lag behind innovation, there is likely to be uncertainty surrounding the boundaries of facial recognition for some time.

To me, facial recognition technology is advancing at a frightening pace, and I worry that not enough thought has been given to the potential consequences. Until further risk-benefit analysis is conducted, the technology should be used only where it is shown to have a significant effect on public safety, and private corporations should not be allowed to use it at all. Right now, the increased consumer convenience, such as the ability to pay without taking out your wallet, is insignificant compared to the potential for abuse.