Fix bias in facial recognition technology
Rep. Elijah Cummings believes his face is probably in some police database somewhere. That’s because Baltimore officers used facial recognition technology to identify people who committed crimes during the protests after Freddie Gray died in police custody four years ago. The longtime West Baltimore resident didn’t protest; he was among the community leaders marching peacefully and trying to calm residents. Whether he liked it or not, his face was visible to cameras throughout the neighborhood and, therefore, to the police.
The use of people’s images without their knowledge was one of the red flags raised during a recent House Oversight Committee hearing called by Chairman Cummings about facial recognition technology, which has been adopted enthusiastically and rapidly with no rules or regulations. Photos of nearly half of all Americans are already stored in a law enforcement database, and most people don’t even know it. Even more troubling, studies have found the technology is not as accurate on women or darker-skinned people as it is on white men, opening the door to misidentification and false arrests.
In a test last year by the ACLU, Amazon’s Rekognition tool incorrectly matched 28 members of Congress to people who had been arrested for a crime when the lawmakers’ photos were searched against a database of 25,000 arrest photos. The tool disproportionately misidentified lawmakers of color, including six members of the Congressional Black Caucus: nearly 40 percent of the false matches were people of color, even though they make up only 20 percent of Congress. MIT Media Lab researchers also tested Amazon’s technology, which is being heavily marketed to police departments and other law enforcement agencies and is being tested by the FBI. They found it got the gender of darker-skinned women wrong 30 percent of the time while correctly identifying the gender of light-skinned people in almost every instance. (An Amazon representative told The Washington Post that the company disputed the findings and the way the research was conducted.) Studies have found inaccuracies in other systems as well.
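To make the arithmetic behind that disparity concrete, here is a minimal, hypothetical sketch of how a test along the ACLU’s lines can be scored: each lawmaker’s portrait is searched against arrest photos of other people, any hit above a similarity threshold counts as a false match, and the false-match rate is then broken down by demographic group. The match_faces function, the threshold and the data structures are illustrative placeholders, not Rekognition’s actual interface.

```python
# Hypothetical scoring of an ACLU-style test: count false matches per
# demographic group. `match_faces` is a placeholder similarity function,
# not a real Rekognition call.
from collections import defaultdict

def false_match_rates(portraits, mugshots, match_faces, threshold=0.8):
    """portraits: dicts like {"name": ..., "group": ..., "image": ...};
    mugshots: images of people who are NOT among the portrait subjects;
    match_faces: callable returning a similarity score in [0, 1]."""
    hits = defaultdict(int)    # false matches per group
    totals = defaultdict(int)  # portraits tested per group
    for person in portraits:
        totals[person["group"]] += 1
        # Any hit above the threshold is a false match, since no portrait
        # subject actually appears in the mugshot set.
        if any(match_faces(person["image"], m) >= threshold for m in mugshots):
            hits[person["group"]] += 1
    return {group: hits[group] / totals[group] for group in totals}
```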
People on both sides of the political aisle agree that something needs to be done to rein in the unchecked use of facial recognition, given the civil liberties concerns it raises. Civil liberties and activist groups like the ACLU are pushing for a ban on the technology because of concerns over racial bias and privacy. A bill with bipartisan support was introduced in the U.S. Senate in March to institute such a ban, and San Francisco became the first major city in the United States to ban government use of facial recognition technology earlier this year. Other cities, including Oakland, California, and Somerville, Massachusetts, also have legislation to prohibit use of the technology.
We believe the concerns are valid and disturbing and that something must be done to protect people’s rights and prevent the country from becoming a surveillance state. But an all-out ban is not necessarily the right answer. The technology is already so widespread that it might be hard to stop its use at this point; it is used not only by police but by airports, retailers and other companies. It can also be a useful law enforcement tool if used correctly. Authorities used facial recognition to help identify the suspect in the mass shooting at the Capital Gazette newspaper last June. Jarrod Warren Ramos was uncooperative, so in an effort to identify him quickly, Anne Arundel County police ran his photo through the Maryland Image Repository System. The technology compared the image of his face against millions of stored photographs from state and federal repositories and measured his features against the dimensions of others’ faces to find potential matches. Mr. Ramos has entered a plea of not criminally responsible and still faces trial in the case.
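For readers wondering how such a one-to-many search works in principle, here is a simplified, hypothetical sketch: each face is reduced to a numeric feature vector (an “embedding”), and stored photos are ranked by their similarity to the probe image, returning candidate matches rather than a confirmed identification. The embedding step and the gallery structure are assumptions for illustration; the Maryland Image Repository System’s actual workings are not modeled here.

```python
# Simplified one-to-many face search: rank gallery photos by similarity
# of their embeddings to the probe image. Purely illustrative; not the
# Maryland Image Repository System.
import numpy as np

def identify(probe_embedding, gallery, top_k=5):
    """gallery: list of (person_id, embedding) pairs, where each embedding is a
    unit-normalized NumPy vector. Returns the top_k most similar identities
    with their cosine-similarity scores."""
    scored = [(pid, float(np.dot(probe_embedding, emb))) for pid, emb in gallery]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]  # candidate matches only, not a confirmed ID
```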
But the technology should be regulated by the government in some way, because right now companies and police departments can use it however they see fit. The technology companies won’t self-regulate. Amazon shareholders recently voted down a proposal to stop selling the company’s technology to government agencies until it could be determined that it does not violate people’s civil rights. Another proposal calling for an independent study was also rejected.
Congress should act sooner rather than later as the software becomes more widespread by the day and the potential for abuses grows along with its popularity.