Some image recognition programs would not recognize this image as a bride. But not all brides wear white.
(Photo by Julie Johnson, Unsplash)

Bias and discrimination are often used interchangeably. But they are not the same. Bias is a point of view or a preference for one thing over another. I like chocolate chip cookies, but not the ones with nuts. I’m biased against nuts in chocolate chip cookies.

Not all bias is bad. Sometimes you want to exclude something because the question you are asking or the outcome you are seeking requires it. For example, if I want to understand wages and tenure for mid-level women HR professionals, I would exclude men and temps, then decide how many years' experience or which job levels I want to see and eliminate the data on everyone outside that range. To get the information I want, I have to ask a narrow question that necessarily excludes people by gender and probably age. But that is a research question, not discrimination.
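If you work with the data directly, the same idea looks something like this. This is a rough sketch against a hypothetical employee table; the column names, values, and cutoffs are invented for illustration.

```python
import pandas as pd

# Hypothetical employee table. The columns and cutoffs are made up purely to
# illustrate a deliberately narrow, "biased" research filter.
employees = pd.DataFrame({
    "gender":           ["F", "M", "F", "F", "M", "F"],
    "employment_type":  ["regular", "regular", "temp", "regular", "temp", "regular"],
    "job_level":        [3, 4, 3, 5, 2, 4],      # 1 = entry ... 5 = executive
    "years_experience": [6, 12, 4, 9, 1, 11],
    "salary":           [72000, 95000, 48000, 88000, 39000, 91000],
})

# The research question: wages and tenure for mid-level women HR professionals.
# Everyone outside the question (men, temps, junior and executive levels) is excluded.
mid_level_women = employees[
    (employees["gender"] == "F")
    & (employees["employment_type"] == "regular")
    & (employees["job_level"].between(3, 4))
    & (employees["years_experience"].between(5, 15))
]

print(mid_level_women[["years_experience", "salary"]].describe())
```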

Discrimination is a legal term that means making an employment decision that adversely affects an individual or group in a protected class. Discrimination is illegal bias that manifests in employment decisions. (It also applies to housing and, in some states, business relationships, but let's stick to employment decisions here.)

Discrimination is not always obvious. It can hide in the way that programs are designed or the rules that are part of how the program works. When I was hiring for a law firm (a long time ago and definitely past the statute of limitations), I would immediately throw away all resumes where the name had a number attached, like Edwin Paul Baker, III. I perceived this type of naming convention as pompous and thought that people who represented themselves that way would be arrogant jerks. Since I already worked with plenty, I didn’t want more.

My rule was technically neutral. I sorted resumes based on whether they had a number in the name – not discriminatory on its face. But it definitely had an adverse impact on men because women do not generally include numbers with their name even if they are named after their mother and grandmother. The naming tradition also probably correlates to mostly white men because it was a common practice in the British Isles between 1700 and 1875 that continues today. (If you are interested in naming traditions around the world, here is a great guide.)
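Spelled out as a rule, my screen would have looked something like this. The names and the pattern are invented for illustration, but they show how a filter that never mentions a protected class can still fall hardest on one group.

```python
import re

# Hypothetical reconstruction of the rule above: screen out any resume where the
# name ends in a generational suffix. Facially neutral, but the group it removes
# skews heavily toward men.
SUFFIX = re.compile(r",?\s+(Jr\.?|Sr\.?|II|III|IV)$", re.IGNORECASE)

def passes_screen(name: str) -> bool:
    """Return False for names with a generational suffix, True otherwise."""
    return SUFFIX.search(name.strip()) is None

candidates = ["Edwin Paul Baker, III", "Maria Lopez", "James Carter Jr.", "Dana Kim"]
print([name for name in candidates if passes_screen(name)])
# ['Maria Lopez', 'Dana Kim']  (the suffixed, mostly male, names are gone)
```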

It’s important not to treat bias and discrimination as the same thing. Bias is often useful and necessary. Discrimination is illegal and should be remedied and eliminated in employment decisions.


Heather Bussing, HRExaminer Editorial Advisory Board


Bias is always an issue with Intelligent Tools because the machine learning systems only know what they are taught. And what machine learning systems 'learn' is sometimes surprising or just wrong. For example, when I search my images stored in Google Photos for "baby," the search results give me a lot of bald men and men with short, light-colored hair. I seem to have a lot of photos of Josh Bersin. The system appears to correlate light or no hair with age. But it also gives me all the actual baby photos, so I can find the one I'm looking for. There's significant error, but it works well enough for my purpose.

Part of the problem is that the people who teach intelligent systems (coders, designers, data scientists, and folks on your team) are not always aware of what they are teaching. Like parenting, training a machine is a case of modeling behavior (using data). The machine will learn what is delivered, not what is intended. Children learn from the behavior of parents, not from parental intent.

Sometimes the bias comes from the point of view of the people creating the system because they programmed it based on their own experience and reality. We make decisions based on what we think we know. This is often the hiding place for confirmation bias (our tendency to believe things in spite of the facts). One example is a photo recognition program, trained on a bank of photos from the United States, that did not recognize a bride from India because she was dressed in reds and golds instead of white.

Other times, the systems may have biases that the creators don't know about or can't see until the system starts running and producing results. Amazon tried to develop a machine learning system to sort resumes for hiring. They were trying to design a system that would figure out the best-qualified candidates for a position by matching resumes to jobs. They developed 500 different data models and trained the system on 50K key terms. But the system continually discriminated against women. Part of the problem was that Amazon used the data it had: its own bank of resumes, jobs, and hiring decisions from the past 10 years. Because that past data and those hiring decisions favored men, the machine learned to prefer men over women. That was the pattern it saw in the data.
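You can see the mechanism in a toy example. The sketch below has nothing to do with Amazon's actual models or data; it just scores each keyword by how often past resumes containing it led to a hire. Feed it a history where women were rarely hired, and a term like "womens" ends up with the lowest possible score, not because of merit, but because that is the pattern in the data.

```python
# Toy illustration of a system learning the skew in its training history.
# Each record is the set of keywords on a past resume plus whether that
# person was hired. The data is invented.
history = [
    ({"python", "sql", "captain", "chess"}, 1),
    ({"python", "java", "leadership"}, 1),
    ({"sql", "excel", "womens", "chess"}, 0),
    ({"python", "womens", "leadership"}, 0),
    ({"java", "sql", "captain"}, 1),
    ({"excel", "womens", "outreach"}, 0),
]

def keyword_weight(term: str) -> float:
    """Hire rate among past resumes that contain the term."""
    outcomes = [hired for keywords, hired in history if term in keywords]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

for term in ["python", "captain", "womens"]:
    print(term, round(keyword_weight(term), 2))
# python 0.67, captain 1.0, womens 0.0  (the pattern in the data, not merit)
```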

Those whose data is based on past discrimination are bound to repeat it.

The trouble is that all our data is based on the past. We still have a long way to go before the data will be based on equality for everyone.

In the meantime, there are things we can do to develop and use intelligent tools so that they don’t result in discriminatory employment decisions.

We need to start with data and models that reflect the conditions we want rather than the conditions we have. This means getting lots of people with diverse perspectives involved in creating these tools. It also means constantly asking: What is missing? What is not included that should be here?

In developing and using intelligent tools, we have to carefully test and monitor the outputs for adverse impact and discrimination. Amazon made a hard but correct decision to scrap the system even though they had invested a great deal of time and resources in it.
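One concrete way to test the outputs is the EEOC's four-fifths (80 percent) rule of thumb for adverse impact: if any group's selection rate falls below 80 percent of the highest group's rate, the tool's recommendations deserve a hard look. Here is a minimal sketch, assuming you can attach an aggregate group label to each recommendation; the numbers are invented.

```python
from collections import Counter

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs. Returns selection rate per group."""
    totals, picked = Counter(), Counter()
    for group, selected in decisions:
        totals[group] += 1
        picked[group] += int(selected)
    return {group: picked[group] / totals[group] for group in totals}

def four_fifths_flags(decisions, threshold=0.8):
    """Flag any group whose selection rate is under 80% of the best group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items() if rate / best < threshold}

# Hypothetical screening-tool output: it advanced 50% of group A but only 30% of group B.
decisions = ([("A", True)] * 50 + [("A", False)] * 50 +
             [("B", True)] * 30 + [("B", False)] * 70)
print(four_fifths_flags(decisions))  # {'B': 0.6}, well under the 0.8 line
```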

Then, before relying on recommendations from intelligent tools, do a reality check. Does this make sense? Is the suggestion based on limited criteria? Are there other things that are more important, or that also matter?

It’s impossible to overstate this last bit. The output of a machine, much like its human counterpart, will always be wrong to some extent. When we ask the machine to make intelligent forecasts based on desired end-states, we need to be prepared for unintended consequences.

Just as you don't drive off a cliff because your map program tells you to go that way, don't make hiring decisions based solely on machine analysis.

 

Read the Series

  1. Legal Issues in AI: Data Matters
  2. Legal Issues in AI: Bias and Discrimination
  3. Stay tuned, Heather will have another post in this series soon!


 