
Algorithmic Discrimination

If you stayed awake during high school, you might remember the concepts of LOGOS, PATHOS, and ETHOS. But did you ever hear about the black sheep of the rhetorical family, KAIROS? KAIROS, in its purest form, is timing. Just like you shouldn’t hit on a widower at a funeral, there are definitely instances where good arguments don’t land because they don’t fit the current setting or conditions. That’s KAIROS.

WHAT IS KAIROS?


At the IDH, we use the concept of KAIROS to address privacy concerns. How does KAIROS address privacy? That’s a good question.


Remember a major life event that changed the core of who you are. Maybe it was a relationship, maybe it was a book you read, or a piece of information you learned; in any case, this moment changed your life forever. Now, was that single experience all it took to change you, or was it the culmination of a long or short series of moments and events? This is the core of the IDH’s work with KAIROS: As narrative-focused beings, humans can only truly be understood if timing, sequence, and situational context are taken into account.

But what happens when someone -- or some machine -- tries to analyze a human without taking that human’s narrative into consideration?


Robot overlords happen. And so does Algorithmic Discrimination.
(But also Robot Overlords).


Though a certain major movie franchise envisioned an Earth that couldn’t stop the rise of robot overlords, the IDH has hope that we can stop this mechanized insurrection by calling for humans, rather than algorithms, to analyze other humans.

Not convinced this is a problem? Still sound like science fiction? 
Well, let us introduce you -- gently -- to how algorithmic discrimination
is already becoming a way of life in employment decisions.

(And to the Chinese Social Credit Score system,
which is where America is headed if we don't stop the algorithmic invasion.)

Why don’t you read this
ProPublica article about how certain groups of people are more likely to be misjudged by predictive policing algorithms, or maybe this article on China’s Social Credit Score system?


If this hasn’t totally bored you to death, why not take a look at our #NeighborsNotNumbers campaign?

Why (Almost) No One is Fixing Algorithms

First off, most folks don't even understand what an algorithm is. 
Let alone how algorithms are already replacing human judgment
-- in inaccurate, discriminatory, and unconstitutional ways --
and screwing up people's X, Y, Z, and T.

Which is why the IDH has been running around with our hair on fire for two years
trying to help diverse communities get on the same page
regarding algorithmic assessment, education, and improvement.

Our partners on this effort have included:
The Anti-Defamation League
Bytes Media
The Kelley School of Business at Indiana University
The Little Earth Native American HUD Community
The Minnesota Department of Health
along with any Minnesota state representative or city council member
who was kind enough to talk to us.

Here's how we describe the inherent problems
with algorithms in our national pilot curriculum
(for high school students)
with the Anti-Defamation League and Bytes Media.

[insert video]





 

Why the Way Folks Are Fighting Algorithms Won't Work

But let's say you are algorithmically woke.

Like these awesome folks at X, Y, and Z.

The problem is
-- as the IDH explained in Nebraska Lawyer this summer --
that algorithms are so broken
-- and doing so many bonkers things --
that no one can keep track.

And while the IDH
10,000% supports the national efforts
to fight algorithms for being racist and discriminatory,
that is only 1/3 of the problem.

So to help legislators and policymakers
(because they asked)
quickly analyze and assess
the inaccuracy, bias, and constitutional implications
of any algorithm,
the IDH built an academically peer-reviewed method,
which we already use to help educate
citizens, policymakers, and programmers.


 


Used by our national partners

Developed by a cross-disciplinary team
of professors, practicing attorneys, and educators

And academically certified
by the nerds that matter

(with a law review article out for publication this month)

the IDH has developed a teachable, objective, and non-partisan
model for any
citizen, city, or company
to quickly assess
the ethical and legal issues
involved with any given algorithm

(in a language that is usable to programmers,
understandable to citizens,
and legally actionable by lawyers and legislators)

Each of our current
algorithmic justice campaigns
uses this method to X and Y.

[hyperlinks]

We are currently writing up a "white paper"
version for X and Y
to be delivered in October 2020.

The nerdy version --
with all the back-end legal research --
is coming soon,
and we'll always take more (academic) feedback.
(Email us.)

And here
you can see how we are teaching
this method to college and high school students
around the country.

 

The IDH's Method for Assessing Algorithmic Unreliability

Why Everyone -- And Especially Christians -- Needs to Fight for Our Algorithmic Rights: Privacy, Due Process, Free Speech, and Equal Protection
