Battling Bias

How to combat human bias unwittingly introduced into artificial intelligence

Artificial intelligence (AI): It’s the technology behind personal assistants such as Alexa and Siri, and behind self-driving cars. It also powers chatbots and predictive technology (think Netflix and Amazon recommendations). Despite the technology’s controversial threat to the human workforce, it brings undeniable benefits, such as advancements in production line automation, customer satisfaction monitoring and production management.1

But there’s an often-overlooked element of AI: bias. AI algorithms use data to make decisions, and if those data are flawed, the AI’s outputs also will be flawed. So what biases can make their way into these data, and what are the implications of biased AI?

Types of biases

It’s well-documented that we—as humans—are unconsciously biased. According to authors Jake Silberg and James Manyika, AI has the potential to help reduce that bias. Machine learning algorithms can improve decision making, thus making it fairer. They also use training data to “learn to consider only the variables that improve their predictive accuracy.”2

At the same time, AI can perpetuate social biases. Two areas in which AI bias is most prevalent are employee hiring and the legal system.3

Employee hiring


A study published in the American Economic Review looked at race in the labor market. Researchers sent identical fictitious resumes to organizations in Boston and Chicago. They found that fictitious applicants with white-sounding names were called back for interviews 50% more frequently than applicants with African-American-sounding names.4

AI technologist Kriti Sharma said these biases can be exacerbated when AI is used to help hiring managers find the ideal candidate. As biased data are fed into an AI application, the technology incorrectly learns what types of people make better candidates for certain jobs. Sharma outlined the following example of a hiring manager in search of a new programmer:

“So far, the manager has been hiring mostly men. So the AI learns men are more likely to be programmers than women. And it’s a very short leap from there to: Men make better programmers than women. We have reinforced our own bias into the AI. And now, it’s screening out female candidates.”5
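Sharma’s scenario can be made concrete with a toy sketch. The data and “model” below are invented for illustration—no real screening system is this simple—but they show how a model trained only on historical outcomes replays the hiring manager’s bias:

```python
# Illustrative only: a toy screening "model" that simply replays
# historical hiring rates. The data are invented.

history = (
    [("M", True)] * 8 + [("M", False)] * 2 +   # mostly male hires
    [("F", True)] * 1 + [("F", False)] * 4
)

def hire_rate(gender):
    """Fraction of past applicants of this gender who were hired."""
    outcomes = [hired for g, hired in history if g == gender]
    return sum(outcomes) / len(outcomes)

def screen(gender, threshold=0.5):
    """Advance a candidate only if their group's historical rate clears the bar."""
    return hire_rate(gender) >= threshold

print(hire_rate("M"), hire_rate("F"))  # 0.8 0.2
print(screen("M"), screen("F"))        # True False -- female candidates screened out
```

Nothing in the code mentions ability; the bias enters entirely through the training data, exactly as Sharma describes.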

According to Rumman Chowdhury, an AI expert at Accenture, the issue persists even with perfect data and a perfect model.

“You use all of your historical data to train a model on who should be hired and why,” Chowdhury said. “Then you parse their resume or look at people’s faces while they’re interviewing. But you’re assuming that the only reason people are hired and promoted is pure meritocracy, and we actually know that not to be true. So, in this case, there’s nothing wrong with the data, and there’s nothing wrong with the model, what’s wrong is that ingrained biases in society have led to unequal outcomes in the workplace, and that isn’t something you can fix with an algorithm.”6

Legal system

In some states, judges are using risk assessment scores to help determine the prison sentences they hand down to defendants. Using AI, a computer program calculates the likelihood that a convicted criminal will reoffend—a higher score means a higher predicted likelihood of reoffending.

A 2016 ProPublica investigation, however, found the algorithms behind these scores to be biased against people of color. The algorithm rated black defendants as high risk more often than white defendants and predicted they would reoffend at almost twice the rate of white defendants. In reality, the algorithm was about as accurate as a coin flip.7
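The distinction between overall accuracy and group-level error rates is worth making concrete. The confusion-matrix numbers below are invented for illustration (they are not ProPublica’s figures), but they show how two groups can see the same overall accuracy while one bears a far higher false positive rate—being labeled high risk without reoffending:

```python
# Invented numbers: same overall accuracy, very different false-positive rates.

def rates(tp, fp, tn, fn):
    """Accuracy and false-positive rate from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    fpr = fp / (fp + tn)  # labeled high risk but did not reoffend
    return accuracy, fpr

acc_a, fpr_a = rates(tp=40, fp=30, tn=25, fn=5)    # group A
acc_b, fpr_b = rates(tp=15, fp=10, tn=50, fn=25)   # group B

print(f"A: accuracy {acc_a:.0%}, false-positive rate {fpr_a:.0%}")
print(f"B: accuracy {acc_b:.0%}, false-positive rate {fpr_b:.0%}")
```

Both groups score 65% accuracy, yet group A’s false positive rate is more than three times group B’s—the shape of disparity ProPublica reported.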

This, again, is likely due to biased data—evidence shows that African-Americans are disproportionately targeted in policing.8

Law enforcement officers are using a type of AI called facial recognition to identify people during police stops and to track criminals. Facial recognition is a biometric AI-based technology that can identify a person by analyzing his or her distinguishable facial features—in person, or from a digital image or video.9

One aspect of facial recognition is analysis systems, which are used to identify characteristics such as race and gender. The accuracy of these analysis systems is controversial, particularly as they’re used in the legal system.10

People have too much trust in the accuracy and impartiality of these systems, said Karen Hao, an AI reporter for MIT Technology Review.

“The logic goes that airport security staff can get tired and police can misjudge suspects, but a well-trained AI system should be able to consistently identify or categorize any image of a face,” Hao said. “But in practice, research has repeatedly shown that these systems deal with some demographic groups much more inaccurately than others.”11

These systems are better at identifying white men than women and people of color, partly because the data sets used to train the algorithms are more robust for white men. For example, certain gender classification systems had a 34.4% higher error rate for dark-skinned women than light-skinned men.12

“[W]hile many of them are supposedly tested for fairness, those tests don’t check performance on a wide enough range of faces ...,” Hao said. “These disparities perpetuate and further entrench existing injustices and lead to consequences that only worsen as the stakes get higher.”13

Amazon’s facial recognition software found itself in the line of fire when, during a test, it incorrectly matched California state legislators to mugshots. More than half of the false positives were people of color. Amazon’s system also falsely matched members of Congress to mugshots.14

Due to these inaccuracies, the American Civil Liberties Union (ACLU) of Northern California is supporting a bill that would prohibit the technology from being used with police body cameras.

“Facial recognition-enabled police body cameras would be a disaster for communities and their civil rights, regardless of the technology’s accuracy,” said Matt Cagle, an ACLU attorney. “Even if this technology was accurate, which it is not, face recognition-enabled body cameras would facilitate massive violations of Californians’ civil rights.”15

The ACLU performed both tests using Amazon’s default match confidence threshold of 80%. Amazon, however, says it recommends law enforcement use a 99% threshold.
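The effect of that threshold is easy to sketch. The names and confidence scores below are invented; real systems return a confidence value for each candidate match, and the caller decides the cutoff:

```python
# Invented scores: how a confidence threshold changes which matches are reported.

matches = [("person_a", 0.99), ("person_b", 0.93),
           ("person_c", 0.85), ("person_d", 0.81)]

def report(candidates, threshold):
    """Keep only matches at or above the confidence threshold."""
    return [name for name, confidence in candidates if confidence >= threshold]

print(report(matches, 0.80))  # all four candidates reported at the 80% default
print(report(matches, 0.99))  # only the single strongest match survives at 99%
```

At the 80% default, every marginal match is reported; at 99%, only near-certain matches survive—which is why the choice of threshold matters so much in testing claims like the ACLU’s.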

Sources of bias

According to Silberg and Manyika, biases typically come from the data used to train AI systems, not the algorithms themselves. “Models may be trained on data containing human decisions or on data that reflect second-order effects of societal or historical inequities,” they said.16

Two common ways biases make their way into data are:

  • How the data are collected or selected. If a particular group is over- or underrepresented in a data set, for example, the data fed into the AI system will be unrepresentative of reality. This is at the heart of the facial recognition software issues discussed earlier.
  • The data reflect prejudices that exist in society. Take, for instance, Sharma’s earlier example of a manager looking to hire a new programmer. Historically, the position has been filled mostly by men, so the AI system concludes the best candidate for the job is a man, consequently filtering out female applicants.17-19
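The first failure mode—unrepresentative sampling—can be caught with a simple comparison between a training set’s demographic mix and the population the system will serve. The group names and numbers below are invented for illustration:

```python
# Invented numbers: flag groups that are underrepresented in the training data.

population_share = {"lighter_skinned": 0.70, "darker_skinned": 0.30}
training_counts = {"lighter_skinned": 830, "darker_skinned": 170}

total = sum(training_counts.values())
for group, target in population_share.items():
    actual = training_counts[group] / total
    status = "UNDERSAMPLED" if actual < target else "ok"
    print(f"{group}: {actual:.0%} of data vs. {target:.0%} of population -> {status}")
```

A check this simple won’t catch societal prejudice baked into the labels—the second failure mode—but it does surface the sampling skew behind the facial recognition disparities described earlier.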

What’s being done about it

Fixing bias in AI is no small task, but some are taking a stab at it.

Chowdhury, for example, has offered a three-step process for reducing the risk of spreading societal biases:

  1. Ensure algorithm coding isn’t perpetuating bias.
  2. Think about how to use AI to combat biased data.
  3. “Make sure our house is in order—we can’t expect an AI algorithm that has been trained on data that comes from society to be better than society—unless we’ve explicitly designed it to be.”20

According to IBM, AI systems must be developed and trained using unbiased data and easily understood algorithms.21 To that end, IBM’s researchers are creating automated bias-detection algorithms that evaluate the consistency of an AI system’s decision making.
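IBM has not published the algorithms themselves, but a common form of consistency check is counterfactual: hold the fundamentals of a case fixed, flip only a variable that should be irrelevant, and see whether the decision changes. The toy model and attribute names below are invented—deliberately biased so the check has something to find:

```python
# Invented, deliberately biased toy model plus a simple consistency check.

def toy_model(applicant):
    """Scores an applicant -- and (wrongly) rewards being male."""
    score = applicant["years_experience"] * 10
    if applicant["gender"] == "M":
        score += 15  # the ingrained bias we want the check to detect
    return score >= 50

def consistency_check(model, applicant, attribute, alternatives):
    """Return attribute values that flip the decision, fundamentals held fixed."""
    baseline = model(applicant)
    return [v for v in alternatives
            if model({**applicant, attribute: v}) != baseline]

applicant = {"years_experience": 4, "gender": "M"}
print(consistency_check(toy_model, applicant, "gender", ["F"]))  # ['F'] -> bias found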

“If there is a difference in the solution chosen [for] two different problems, despite the fundamentals of each situation being similar, then there may be bias for or against some of the non-fundamental variables,” said author Bernard Marr. “In human terms, this could emerge as racism, xenophobia, sexism or ageism.”22

Everyone from governance groups to data scientists must be involved to ensure a balance between innovation and oversight.

“[B]uilding a culture of reporting and accountability throughout an organization means there will be a far greater chance to spot and halt bias in data, algorithms or systems before it is perpetuated and becomes harmful,” Marr said.23

—compiled by Lindsay Pietenpol, assistant editor


  1. Satya Ramaswamy, “How Companies Are Already Using AI,” Harvard Business Review, April 14, 2017, https://hbr.org/2017/04/how-companies-are-already-using-ai.
  2. Jake Silberg and James Manyika, “Tackling Bias in Artificial Intelligence (and in Humans),” McKinsey & Co., June 2019, www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans.
  3. Karen Hao, “This Is How AI Bias Really Happens—and Why it’s so Hard to Fix,” MIT Technology Review, Feb. 4, 2019, www.technologyreview.com/s/612876/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix.
  4. Marianne Bertrand and Sendhil Mullainathan, “Are Emily and Greg More Employable Than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination,” American Economic Review, Vol. 94, No. 4, September 2004, pp. 991-1013, www.aeaweb.org/articles?id=10.1257/0002828042002561.
  5. Kriti Sharma, “How to Keep Human Bias Out of AI,” TED talk presentation, www.ted.com/talks/kriti_sharma_how_to_keep_human_biases_out_of_ai/transcript?language=en.
  6. Bernard Marr, “Artificial Intelligence Has a Problem With Bias, Here’s How to Tackle It,” Forbes, Jan. 29, 2019, www.forbes.com/sites/bernardmarr/2019/01/29/3-steps-to-tackle-the-problem-of-bias-in-artificial-intelligence/#5e5896ac7a12.
  7. Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, “Machine Bias,” ProPublica, May 23, 2016, www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
  8. John Villasenor, “Artificial Intelligence and Bias: Four Key Challenges,” Brookings, Jan. 3, 2019, www.brookings.edu/blog/techtank/2019/01/03/artificial-intelligence-and-bias-four-key-challenges.
  9. Bernard Marr, “Facial Recognition Technology: Here Are the Important Pros and Cons,” Forbes, Aug. 19, 2019, www.forbes.com/sites/bernardmarr/2019/08/19/facial-recognition-technology-here-are-the-important-pros-and-cons/#5676633414d1.
  10. Karen Hao, “Making Face Recognition Less Biased Doesn’t Make it Less Scary,” MIT Technology Review, Jan. 29, 2019, www.technologyreview.com/s/612846/making-face-recognition-less-biased-doesnt-make-it-less-scary.
  11. Ibid.
  12. Ibid.
  13. Hao, “This Is How AI Bias Really Happens—and Why it’s so Hard to Fix,” see reference 3.
  14. Steven Melendez, “Amazon’s Face-Recognition Tool Falsely Matched California Lawmakers to Mugshots, ACLU Says,” Fast Company, Aug. 14, 2019, www.fastcompany.com/90389905/aclu-amazon-face-recognition-falsely-matched-ca-lawmakers.
  15. Ibid.
  16. Silberg, “Tackling Bias in Artificial Intelligence (and in Humans),” see reference 2.
  17. Hao, “This Is How AI Bias Really Happens—and Why It’s So Hard to Fix,” see reference 3.
  18. Silberg, “Tackling Bias in Artificial Intelligence (and in Humans),” see reference 2.
  19. Sharma, “How to Keep Human Bias Out of AI,” see reference 5.
  20. Marr, “Artificial Intelligence Has a Problem With Bias, Here’s How to Tackle It,” see reference 6.
  21. IBM Research, www.research.ibm.com/5-in-5/ai-and-bias.
  22. Marr, “Artificial Intelligence Has a Problem With Bias, Here’s How to Tackle It,” see reference 6.
  23. Ibid.

News Briefs

Patricia Aubel and Kayla Reiman have been named recipients of this year’s ASQ Statistics Division’s Ellis R. Ott Scholarship. Each will receive a $7,500 scholarship for her educational pursuits. Aubel is studying for a master’s degree in statistics at the University of California, while Reiman is studying data analytics applications to public policy at Carnegie Mellon University in Pittsburgh. Applications for next year’s scholarship will be accepted Jan. 1–April 1. Visit asq.org/statistics/about/awards-statistics.html for more information and to download an application form.

The July 2019 issues of the Journal of Quality Technology (JQT) and Quality Management Journal (QMJ) have been posted on asq.org/pub. In addition, ASQ members—professional and above—now can purchase a hard-copy version of each journal that contains all the articles from the entire year in one volume. Order by Oct. 31 to reserve your copy for $45 each. The annual hard copies of QMJ and JQT for 2019 will be mailed at the end of the year. Call ASQ Customer Care at 1-800-248-1946 to place your order.

Organizations advancing in the 2019 Baldrige Award selection process were recently informed whether they were receiving site visits from examiner teams, a key step in determining this year’s finalists. From the extensive applications and site visits, award recipients will be selected for the high honor in November. A total of 26 organizations applied for the award this year, including 16 healthcare organizations, five nonprofit organizations, three small businesses, one service organization and one education organization.

Reinaldo Figueiredo, ANAB’s senior director of product, process and services accreditation programs, has been elected to serve on the Asia Pacific Accreditation Cooperation (APAC) executive committee. APAC, the amalgamation of the Asia Pacific Laboratory Accreditation Cooperation and the Pacific Accreditation Cooperation, manages a mutual recognition among accreditation bodies in the Asia Pacific region. 

Getting to know…

Julia Seaman

Current position: Business consultant for healthcare and biotech companies.

Education: Doctorate in pharmaceutical sciences and pharmacogenomics from the University of California, San Francisco.

What was your introduction to quality? My introduction came while working for my doctorate in the lab designing and implementing new assays and equipment.

Is there a teacher who influenced you more than others? Why? My high school chemistry teacher, Mr. Fadden, made learning as fun and interactive as possible, but he required the highest quality work.

Do you have a mentor who has made a difference in your career? I’ve been fortunate to have had formal and informal mentors throughout my career who supported me.

What is the best career advice you ever received? Knowing what you don’t like is as important as knowing what you do like.

Have you had anything published? Through my doctoral research and statistical consulting, I have had many articles published in multiple scientific publications.

What ASQ activities do you participate in? Regular columnist for QP’s Statistics Spotlight.

What activities or achievements outside of ASQ do you think are noteworthy? I rappelled off the Hyatt Regency hotel in San Francisco with my mother as part of a charity event.

Any recent awards or honors? I have received grants from the Hewlett Foundation to study online learning in American school systems.

Personal: Married.

What are your favorite ways to relax? Traveling to new places. We’re trying to visit every park in the U.S. National Park System.

What books are you currently reading? I’m re-reading Joseph Heller’s Catch-22, and I just started Winesburg, Ohio: A Group of Tales of Ohio Small Town Life by Sherwood Anderson and Thinking in Bets: Making Smarter Decisions When You Don’t Have All the Facts by Annie Duke. I always recommend Agatha Christie books.

Do you have favorite blogs? “Flowing Data” and “Quantified Self” are great places to learn new things and see novel ways data can be presented and visualized.

What was the last movie you saw? “Arctic,” a movie about a man stranded in the Arctic after an airplane crash. I watched it while flying home from a conference.

Quality quote: Quality is more than a process. It’s a whole approach that is applicable across businesses.
