Amazon’s Ring Planned Neighborhood “Watch Lists” Built on Facial Recognition

In an interview with The Intercept, Liz O’Sullivan, technology director at the Surveillance Technology Oversight Project, described Ring’s planned “proactive suspect matching” feature as “the most dangerous implementation of the word ‘proactive’ I’ve ever heard,” and questioned the science underlying any such feature. “All the AI attempts I’ve seen that try to detect suspicious behavior with video surveillance are absolute snake oil,” said O’Sullivan, who earlier this year publicly resigned from Clarifai, an AI image-analysis firm, over its work for the Department of Defense.

O’Sullivan explained that “there’s no scientific consensus on a definition of visibly suspicious behavior in biometrics. The important question to ask is, who gets to decide what suspicious looks like? And the way I’ve seen it attempted in industry, it’s just an approximation.” Any attempt to hybridize humankind’s talent for prejudice with a computer’s knack for superhuman pattern recognition will result in superhuman prejudice, O’Sullivan fears. “In order for society to function well, police have to be impartial; we have to get to a place where they treat people equally under the law, not differently according to whatever way an algorithm ‘thinks’ we look.”
