DETROIT (WXYZ) — Two years after the City of Detroit purchased facial recognition technology, the Board of Police Commissioners approved a directive Thursday (8-3) that will govern how the technology is used.
The directive, which went through a series of revisions this summer, contains a number of provisions aiming to quell community fears around privacy encroachment and inaccuracy. Included within the passed directive are prohibitions on the use of the technology on live-feed videos; rules limiting the use of software to investigations revolving around violent crimes and home invasions; and a ban barring use for immigration enforcement.
Yet, despite these steps, opposition to and distrust of the technology remain high — a residual effect stemming from the fact that procedures and explanations for how the technology has been used over the past two years have at times been obscured and contradictory.
“The public policy that you have passed and recommended is a cover-up, not just a rubber stamp, but a cover-up for two years of impunity, of secrecy by the mayor and the chief of police on this matter,” said Rev. Bill Wylie-Kellermann during public comment following the vote Thursday.
Police Chief James Craig maintains that the department has used the software ethically and transparently over the past two years — even without a commissioner-approved directive — and never attempted to obscure its use of the technology.
“When we purchased the software in 2017 we never did it in secret,” Craig told WXYZ this week. “It was — we went before council, we talked about the software, members of my staff clearly explained how the software was being used.”
While the purchase was never hidden, attempting to understand how the technology has been used over the past two years is still difficult. A WXYZ deep dive into the last two years of use highlights changing policies, and an information gap: as recently as April 2019, journalists and researchers requesting standard operating procedures were told the department’s facial recognition policies were still being developed.
The end result is a murky roll-out that still, even with the approved directive, leaves questions unanswered.
"The new policies are much more restrictive than the other policies," said Eric Williams, an attorney with the Detroit Justice Center, who is also working with ACLU-Michigan on a committee opposing the city's surveillance tactics. "DPD said those were just fine. Now they are not? Why should we trust this? The process was as flawed as the technology that is being implemented."
Changing Policies Over the Last Two Years
Facial recognition technology was first purchased by the City of Detroit in July 2017. The three-year, $1,045,843 contract was with South Carolina-based company DataWorks, which pitched its product FACE Watch Plus.
“FACE Watch Plus tracks face images from live video surveillance, processes the images, then searches your database and alerts you when a match/hit has been made,” DataWorks’ proposal explained. “It detects faces within surveillance footage in real-time, then uses cutting-edge facial searching algorithms to rapidly search through your agency’s mugshots or watchlist database for positive matches.”
But conversations about the potential use of the technology began earlier, as the Detroit Police Department began to build out its constellation of real-time cameras as part of Project Green Light in 2016.
“We’ve got facial recognition coming next,” Mayor Mike Duggan said at a January 2016 press conference detailing the start of Project Green Light.
“We’re going to be able to match outstanding warrants against these cameras,” he continued, as an image of a man walking into a Green Light monitored gas station played on the screen.
The Mayor has since said he does not support using facial recognition technology on live feeds, and supports it only in conjunction with violent crimes. The rules that dictated the program for the past two years, however, stated otherwise.
A Standard Operating Procedure from April 2019 stated that the police department could “connect the face recognition system to any interface that performs live video, including cameras, drone footage, and body-worn cameras. The face recognition system may be configured to conduct face recognition analysis on live or recorded video.”
That same month, Chief James Craig — who now also maintains the software has never been and would never be used on live footage — responded to city council questions about the technology and the larger surveillance network.
“For clarification, our policy regarding the use of facial recognition allows for the use of facial recognition when officers have reasonable suspicion that a crime has occurred. This is not limited to only violent crime,” an April 3, 2019 letter signed by Craig stated.
When questioned about this statement this week, Craig said he was not familiar with the response.
“I don’t have personal recollection of that,” he said in an interview with WXYZ, “however it has been my position, and it continues to be my position that we will only use the still photos involving violent crime, with the exception of a home invasion one.”
The directive that passed does limit usage to violent crimes or home invasions, but whom it can be used on remains broad.
“Facial Recognition shall only be used when there is reasonable suspicion that such use will provide information relevant to an active or ongoing Part 1 Violent Crime investigation or Home Invasion I investigation,” the passed directive states.
The vague language has raised red flags for those who have been following the situation in Detroit.
“Law enforcement agencies use facial recognition not just to search for suspects but for witnesses, victims and also people just associated in some way with an investigation,” said Clare Garvie, a senior associate at Georgetown’s Law Center on Privacy & Technology who co-authored a paper last spring detailing Detroit’s purchase of facial recognition software and the lack of accountability and community input around the process.
While Garvie sees Thursday’s more streamlined directive as a step in the right direction, questions over the efficacy of the technology still remain.
After using the technology more than 600 times, the department says it still can’t say how many arrests it has led to.
“That’s a good question, I don’t have that answer. We don’t know. I think we’re trying to look into that,” Craig told a gaggle of journalists this summer when asked about how many times a “credible match” — two analysts and a supervisor agreeing on the same match with the technology — resulted in an arrest.
When asked Wednesday for this information Craig was unable to give a number.
Contact investigative producer Allie Gross at email@example.com or at (248) 827-9455 and 7 Investigator Ross Jones at firstname.lastname@example.org or at (248) 827-9466.