NYPD issues policy on facial recognition software after nearly a decade of use

The NYPD has issued its first detailed public policy on how cops use facial recognition to fight crime in the Big Apple — after using the controversial technology for nearly a decade.

“It’s incumbent upon us to get out there and tell our story, to inform the public and, quite frankly, legislators, that this is how we use this technology,” Police Commissioner Dermot Shea told The Post Thursday.

The department has embraced the tech — creating its own dedicated Facial Identification Section staffed with nearly a dozen cops — since it tapped the digital tool for policing in 2011.

Pro-policing experts have lauded the face-scanning software as a boon for law enforcement — with some recent high-profile successes.

Last August, the NYPD’s unit identified a mystery man who terrorized straphangers with a pair of rice cookers, echoing the 2016 terrorist attack by Chelsea bomber Ahmad Rahami. Just weeks prior, cops used the tech to track down, within 24 hours, a man accused of trying to rape a woman at knifepoint.

But the NYPD’s new-age tool has faced backlash from privacy advocates over the lack of a comprehensive public policy — and a proposed statewide ban after a few dozen cops were said to be using controversial third-party facial-recognition software on their private phones.

Shea said the majority of the privacy issues stem from misconceptions about the unit — and he believes the new public policy will address those concerns.

“I think we need to always balance public privacy concerns with how we use technologies and I think that’s a healthy discussion,” Shea said.

“The New York City Police Department needs effective tools to keep people safe,” the commissioner added. “I think it’s self-evident that as businesses, private citizens employ the use of cameras more and more… it logically leads to the next question: Well, what are you going to do with those images once you do recover somebody committing the crime?” 

The new policy details a step-by-step process of how FIS investigators document and obtain a photo or video evidence, search for a match, run a background check on the potential hit and submit for peer and supervisor reviews.

The policy also notes, in a detail not previously reported, that the NYPD’s Intelligence Bureau uses the technology and is bound by the same rules.

“The facial recognition process does not by itself establish probable cause to arrest or obtain a search warrant, but it may generate investigative leads through a combination of automated biometric comparisons and human analysis,” the policy reads.

The new directive also prohibits searches against external databases of images unless approved “in a specific case for an articulable reason” by the Chief of Detectives or the Deputy Commissioner of Intelligence and Counterterrorism.

The NYPD had previously passed on using the controversial Clearview AI, which scraped millions of photos from social media, over security concerns.
