Justice Tech Download: Deep fake fingerprints, voice risk assessments, & facial recognition
Forwarded from a friend? Sign up!
I'm sad to say it's the end of the semester and our lectures have gone quiet. To make up for it, I'm going to double down on news coverage and start including relevant events around the country. As always, feedback, recommendations and tips are welcome.
In the news
We've been hearing more about the perils of "deep fakes", a use of AI to create fake but realistic videos. Welp, it seems that same technology can be used to create fake fingerprints. (Fortune)
Sarah Lageson talks about the internet of criminal records (IoCR?) with a journalist exploring her own run-in with a mugshot extortion scheme. (NPR)
Under a 2017 warrant, the FBI booby-trapped Word documents to uncover the true IP addresses of cybercriminals. This approach, called the Network Investigative Technique (remote hacking), was made easier after Rule 41 of the Federal Rules of Criminal Procedure was changed in late 2016. (Motherboard)
Facial recognition software in China mistook a face in an ad on a bus for a real pedestrian, and the authorities accused the person pictured of running a red light. (CX Live)
A company claims to have a vocal polygraph that can be used remotely to assess a person's risk. It is already being deployed by government agencies and major tech companies. (Intercept)
Police in the UK want to know who is likely to commit a violent act. The person-based predictive policing model sounds reminiscent of the Chicago PD's (failed) Strategic Subject List. (New Scientist)
While risk assessments grow in popularity, the police in Kent, England--early adopters of place-based predictive policing software in the UK--have decided to discontinue the technology for lack of results. (BBC)
Cynthia Conti-Cook, Kristian Lum and others published a framework to understand the three layers of bias in quantitative risk assessments. (Sage)
Brookings is hosting a discussion on facial recognition technology in D.C. on Dec. 6. (Brookings)