The News: Spurred on by the international wave of protests following the recent high-profile deaths of Ahmaud Arbery, George Floyd, Breonna Taylor, and Rayshard Brooks at the hands of US police officers, IBM, Microsoft, and Amazon have announced that they will temporarily suspend or otherwise limit police departments’ access to their facial recognition technologies, pending the introduction of new national legislation regulating their use by law enforcement agencies.
Earlier this month, Microsoft President Brad Smith explained that Microsoft would no longer “sell facial recognition technology to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology.” This statement followed Amazon’s from the previous day, in which it announced a one-year moratorium on police use of its Rekognition facial recognition technology. Both companies’ statements followed IBM’s June 8th announcement that it was getting out of the facial recognition business altogether. In a letter sent to several members of Congress, IBM also expressed its opposition to “any technology, including facial recognition technology” used “for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and principles of trust and transparency.”
These decisions are not without potential financial repercussions for the three tech giants, as the global facial recognition market is currently on track to reach $8B by 2022.
Source: IBM, Microsoft And Amazon Not Letting Police Use Their Facial Recognition Technology, Larry Magid, Senior Contributor – Forbes
How IBM, Microsoft, and Amazon Tapping the Brakes on Law Enforcement Use of Their Facial Recognition Tech Opens the Door to Overdue Legislation Regulating Its Use
Analyst Take: Separating “Big Brother” uses of facial recognition technology from “Big Mother” uses may be a good place to start.
Fears of potential abuse of sophisticated technologies by governments and law enforcement agencies aren’t new. The gray area between “Big Mother” models of surveillance (in which benevolent law enforcement agencies only use mass surveillance, facial recognition, and crime prediction algorithms and tools to locate and identify suspects and immediate threats to public safety) and “Big Brother” models of surveillance (in which law enforcement agencies apply these technology tools to not-so-benevolent purposes) is often razor-thin. With each new generation of these technologies, jurists and legislators are left to ponder where to draw legal lines along the increasingly blurry edges of what constitutes a warrantless search, and decide where the boundaries of reasonable expectations of privacy begin and end. Also increasingly in question, especially as artificial intelligence finds its way into these platforms, is to what degree implicit bias may plague them, and therefore how such bias could ultimately impact the quality of policing that relies on these solutions for intelligence, analysis, and even everyday “best practices.”
Bear in mind that the potential danger of abuse when it comes to these types of technologies ultimately lies more in how they are used than in the technologies themselves. When used properly, and for just and legitimate purposes, facial recognition can, among other outcomes, prevent a terrorist attack, help law enforcement officers identify and arrest a dangerous suspect, identify and locate victims of human trafficking, provide evidence of wrongdoing (or innocence) to a jury, and so on. When used improperly, however, facial recognition technology can enable warrantless mass surveillance, the harassment of targeted individuals, and even the misidentification and wrongful apprehension of suspects.
Tackling Gender and Racial Bias in Facial Recognition Technologies
To illustrate one key dimension of the problem as it currently stands, let us travel back to just last year, when MIT Media Lab’s Joy Buolamwini released a study of bias in Amazon’s Rekognition software, in which she found that while Rekognition identified the gender of lighter-skinned men with 100% accuracy, it mistook women for men 19% of the time, and mistook darker-skinned women for men 31% of the time. In an earlier study, Buolamwini noted similar racial and gender biases in IBM’s and Microsoft’s facial analysis software as well (and in software from Megvii, a Chinese facial recognition company). In an entirely separate test in 2018, the ACLU also demonstrated that Rekognition, after scanning photos of US lawmakers, had erroneously matched 28 members of Congress with police mugshots.
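The per-group accuracy gaps these studies report come from straightforward bookkeeping: tally a classifier’s predictions separately for each demographic group, then compute each group’s error rate. A minimal sketch of that measurement, using invented sample numbers chosen only to mirror the figures above (not the study’s actual data):

```python
# Hypothetical illustration of how per-group error rates are computed
# in audits like Buolamwini's. All data below is invented for
# demonstration purposes and does not reproduce the study's dataset.

from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples.

    Returns a dict mapping each group to its fraction of wrong predictions.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Invented sample: gender-classification outcomes split by group.
sample = (
    [("lighter-skinned men", "male", "male")] * 100      # no errors
    + [("darker-skinned women", "female", "female")] * 69
    + [("darker-skinned women", "male", "female")] * 31  # misclassified
)

rates = error_rates_by_group(sample)
```

With these made-up inputs, the lighter-skinned-men group shows a 0% error rate and the darker-skinned-women group a 31% rate, which is how a single aggregate accuracy number can conceal a large disparity between groups.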
While software is bound to improve over time, and Amazon, IBM, and Microsoft have worked hard to improve the accuracy of their technologies, these results highlight just one among dozens of concerns raised by the use of such tools to identify, hunt down, and prosecute potential suspects: namely, that they run the risk of disproportionately misidentifying women and dark-skinned individuals compared to white men. Knowing what we now know about how errors in suspect identification — especially as they pertain to black men and women “fitting the description,” in police parlance — can have dire consequences, it isn’t difficult to see that, when combined with the kinds of disproportionate use of lethal force that originally triggered the wave of protests currently sweeping across the United States, the misuse of facial recognition technology by police departments could lead us down a dangerous path. Understandably, no tech company wants to be associated with miscarriages of justice, particularly those that result in controversial police shootings. This may help explain why Microsoft has been lobbying for facial recognition technology to be regulated for some time now, and why Amazon took a similar tack by drafting and pitching facial recognition legislation to US lawmakers.
The Case for National Legislative Action to Establish Facial Recognition Standards and Guardrails for US Law Enforcement
Until now, federal and state legislative bodies across the United States have been reluctant to add the potential dangers of abuse of facial recognition technologies by law enforcement agencies to their respective legislative agendas, defaulting instead to acknowledging the potential benefits of these tools, particularly within the scope of Homeland Security objectives. One could argue that doing so may have been the right approach initially: Apply these tools to Homeland Security and law enforcement missions, learn what works and what doesn’t, and fix problems as they arise. Unfortunately, the “fixing the problems as they arise” part was brushed off again and again, and we appear to have reached an inflection point (nationwide calls for police reform) that now demands action on this matter. What Amazon, Microsoft, and IBM appear to be doing is calling upon Congress to do what they alone cannot: Establish and impose effective and appropriate national legal guardrails for the use of these technologies by law enforcement agencies that will 1) balance the needs of law enforcement missions against constitutionally mandated US civil and human rights standards, and 2) fit within these companies’ cultures, or at least protect them from the public backlash that is certain to result from being associated with abuses of power by police. Note that a federal approach is preferred by the tech industry, as it will guarantee the application of a consistent legal standard across all jurisdictions.
Several days after the Amazon and Microsoft announcements, Rep. Eddie Bernice Johnson (D-Texas) introduced legislation that would require federal agencies to do just that. Per NextGov.com, “the Promoting Fair and Effective Policing Through Research Act would direct the National Institute of Standards and Technology to expand its investigations and standards development [around the use] of force in police response, and enable new research into policing policies and practices supported and spearheaded by the National Academies and National Science Foundation.”
Per Rep. Johnson, “the best available evidence reveals increased likelihood of law enforcement officials applying force against individuals who are not white, have disabilities or mental health conditions, are members of the LGBT community, have low incomes—and people who fall in the intersections of such groups. Unfairness and bias within advanced technologies that police increasingly harness have the potential to exacerbate such disparities.” Johnson also suggested that Congress and the nation “must study the influence of technology and big data on vulnerable populations and work to root out any biases.” Rep. Johnson chairs the House Science, Space and Technology Committee.
Another bill, introduced by Rep. Don Beyer (D-Virginia) would ban the “use [of] facial recognition technology or other biometric surveillance systems on any image acquired by body-worn cameras of law enforcement officers” if the cameras were purchased with federal Urban Area Security Initiative Grant monies. The Stop Biometric Surveillance by Law Enforcement Act would also prohibit state and local law enforcement agencies from purchasing body cameras with federal funds that incorporate facial recognition technology. Per Beyer, “facial recognition software and other biometric surveillance tools are not yet accurate enough for deployment in law enforcement settings. Without oversight, this ‘Minority Report’ technology will be ripe for abuse. […] Those who use this technology face no public scrutiny or accountability, and it is hard to determine the extent to which it enables or increases racial profiling. […] Without a strong legal regulatory framework, facial recognition technology and biometric surveillance could lead to a slippery slope of unprecedented mass government surveillance in this country. That must not happen.”
Despite the 2020 elections being just a few months away, expect more legislative efforts of this type to surface in the US House and Senate in the coming months.
What Comes Next?
Although IBM has, at least for now, decided to exit the facial recognition space altogether, I don’t expect Amazon or Microsoft to do so. For starters, neither company appears to be tapping the brakes on facial recognition use cases outside of law enforcement. In fact, Amazon plans to continue granting access to its facial recognition tech to NCMEC (the National Center for Missing and Exploited Children) and to companies like Marinus Analytics, which help combat human trafficking, so some law enforcement use cases are still getting the green light. Second, their current stance on the use of their facial recognition tools by police is clearly both temporary and predicated on the passing of national legislation that will impose overdue limitations and guidelines for their use by law enforcement agencies and police departments. As soon as Microsoft and Amazon feel confident that their facial recognition technologies will no longer be abused or misused, deliberately or accidentally, by law enforcement entities, facial recognition tools will once again become a staple of 21st century policing in the United States.
Futurum Research provides industry research and analysis. These columns are for educational purposes only and should not be considered in any way investment advice.
Other insights from Futurum Research:
Image Credit: Security Today
The original version of this article was first published on Futurum Research.
- Data Privacy: Apple Agrees to Delay Enforcement of iOS 14’s New IDFA Opt-in Rules to 2021 - September 9, 2020
- Escalating App Store War Between Apple and Developers Likely to Bleed into Antitrust Probes in US and EU - August 17, 2020
- What Qualcomm’s Landmark Antitrust Victory Against the FTC Really Means for the Mobile Industry, Investors, and Consumers - August 14, 2020