#NEIGHBORSNOTNUMBERS

Facial Recognition and Policing

Facial Recognition
is already happening.

And we already know that it's inaccurate, racially biased,
and probably unconstitutional.

Yes, we know: facial recognition technology sounds like a bad episode of Black Mirror.


But police departments around the country are already using it. 


Legal scholars have extensively documented that police use of facial recognition is inaccurate and discriminatory (and plenty of lawyers and judges worry that it violates privacy and free speech rights),

so it's no surprise that advocates from the left and the right (and major figures from IBM to the Pope)

have called for its regulation (or for banning it outright).

So why is the public still asleep in the fight against facial recognition? And why haven't programmers and policymakers fixed it?

The IDH has some ideas. And solutions.

(1)
There are so many legal and ethical problems with facial recognition that the public (and policymakers) can't keep track.

(Which is why the IDH created a method to categorize them that even high school students understand.)

(2)
While facial recognition technology's proven tendency to racially discriminate is a disqualifying problem, it isn't the only one.
(Which is why the IDH method also helps readers recognize all of the constitutional issues
with police use of facial recognition software.)


(3)
Reforming or reprogramming these algorithms
requires bipartisan and cross-cultural coalitions.


Check us out on social media

  • Facebook
  • Twitter
  • Instagram