Last year, speaking to The Guardian, Tess Posner, CEO of AI4ALL, an organization that strives to increase diversity in artificial intelligence, concluded that we have reached a "tipping point" in AI's diversity crisis. With every passing day, it becomes more challenging to mitigate the biases baked into AI tools and systems. Now more than ever, awareness and consideration of these biases need to be brought to the forefront of all AI development and implementation.
To tackle AI bias, it's important to consider the many different types of bias that exist and to work to combat each of them.
Racial and cultural biases
Racial and ethnic biases pervade AI tools. In one disturbing example, Amazon's facial recognition software falsely matched 28 members of Congress with criminal mugshots. The false matches were disproportionately people of color, including six members of the Congressional Black Caucus, among them civil rights legend Rep. John Lewis. Another study evaluated three commercial gender classification systems and found that darker-skinned females were misclassified far more often (error rates of up to 35%) than lighter-skinned males (error rates of at most 0.8%). The study notes that "The substantial disparities in the accuracy of classifying darker females, lighter females, darker males, and lighter males in gender classification systems require urgent attention if commercial companies are to build genuinely fair, transparent and accountable facial analysis algorithms."
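The key methodological move behind findings like these is disaggregated evaluation: computing error rates per subgroup instead of a single aggregate number. Below is a minimal sketch of the idea; the function, data format, and subgroup labels are hypothetical, not the study's actual code.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Misclassification rate per subgroup.

    `records` is an iterable of (subgroup, true_label, predicted_label)
    tuples, e.g. ("darker_female", "female", "male"). The subgroup
    labels are hypothetical, mirroring the study's four categories.
    """
    totals, errors = defaultdict(int), defaultdict(int)
    for subgroup, truth, predicted in records:
        totals[subgroup] += 1
        if predicted != truth:
            errors[subgroup] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Aggregate accuracy can look fine while one subgroup fares badly:
records = [
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
    ("darker_female", "female", "male"),    # misclassified
    ("darker_female", "female", "female"),
]
print(error_rates_by_group(records))
# -> {'lighter_male': 0.0, 'darker_female': 0.5}
```

A single overall accuracy figure for this toy dataset (80%) would hide the fact that one subgroup's error rate is 0% and the other's is 50%, which is exactly the kind of disparity the study surfaced.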
By no means are racial and cultural biases in AI tools limited to facial recognition software. Consider, for example, the case of a South Korean woman who had to be rescued by firefighters after a robot vacuum cleaner sucked up her hair while she was sleeping on the floor. The designers of the technology had failed to consider that sleeping on the floor is common in some cultures.
Racial and cultural diversity has been shown to improve a company's financial prospects, and these performance benefits likely extend to, and are amplified in, the development of AI. According to research by McKinsey, companies in the top quartile for ethnic and cultural diversity on executive teams are 33% more likely to have industry-leading profitability.
Gender biases
Regardless of how genuine their intentions, designers of AI tools transfer their own biases into the technology they build, and one of the most pervasive is gender bias. Examples abound. A 2016 study found that word embeddings trained on Google News articles encoded gender stereotypes: the vector analogy "man is to computer programmer as woman is to x" was completed with x = homemaker.
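To make the analogy concrete: word embeddings map each word to a vector, and analogy questions are answered with vector arithmetic. Here is a minimal sketch of that probe using the gensim library and the pretrained Google News vectors; the file path is an assumption, and downloading the (roughly 3.6 GB) vector file is left to the reader.

```python
from gensim.models import KeyedVectors

# Load the pretrained word2vec vectors trained on Google News.
# The path is illustrative; the file must be downloaded separately.
vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# "man is to computer_programmer as woman is to ?"
# i.e. find the word whose vector is closest to
# vec(computer_programmer) - vec(man) + vec(woman).
result = vectors.most_similar(
    positive=["woman", "computer_programmer"], negative=["man"], topn=3
)
print(result)  # the 2016 study reported "homemaker" at the top
```

Because the arithmetic simply reflects co-occurrence patterns in the news corpus, whatever stereotypes the corpus contains surface directly in the answers.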
In part, gender biases reflect a lack of gender diversity in AI talent. Research by the World Economic Forum and LinkedIn has found that only 22% of AI jobs are held by women, with even fewer women in senior roles. One of the most effective ways to mitigate gender bias in AI tools is to prioritize achieving gender parity in AI talent. This isn't just morally the right thing to do; it also makes financial sense. McKinsey's research also found that gender diversity is associated with stronger firm performance: companies in the top quartile for gender diversity on executive teams are 21% more likely to outperform on profitability and 27% more likely to have superior value creation.
Dataset biases
Data is AI's oil. When the data used to develop algorithms and train AI is biased, AI tools will only reinforce those biases after adoption and implementation. Recall the word embeddings above that associated women with homemaking: if such associations are built into AI tools, they will be perpetuated regardless of how diverse the development team is.
When considering dataset bias, it's not enough to look at gender and racial biases alone. For companies building AI solutions that aim to glean insight into, and provide recommendations to, customers, it's especially important to ask whether the data used to train machine learning and AI algorithms represents their customer bases. Does your training dataset include everyone in your customer base? If not, what biases might you be unknowingly building into your AI tools and systems?
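One practical starting point is to compare the distribution of a demographic attribute in the training data against its distribution in the full customer base. A minimal sketch, assuming pandas DataFrames and hypothetical column names:

```python
import pandas as pd

def representation_gap(train_df: pd.DataFrame,
                       customers_df: pd.DataFrame,
                       column: str) -> pd.Series:
    """Share of each group in the training data minus its share in the
    customer base; large negative values flag under-represented groups.
    """
    train_share = train_df[column].value_counts(normalize=True)
    customer_share = customers_df[column].value_counts(normalize=True)
    # Treat groups absent from one side as having a 0% share there.
    return train_share.sub(customer_share, fill_value=0.0).sort_values()

# Hypothetical usage:
# gap = representation_gap(training_set, customer_base, "age_band")
# print(gap.head())  # most under-represented groups first
```

A check like this won't catch every form of dataset bias, but it makes the question "does our training data look like our customers?" answerable with a number rather than a guess.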
Departmental biases
Oftentimes, AI tools are built in a silo by people who aren't the end users of the technology. AI design should instead be a cross-organizational endeavor: at a minimum, different departments and divisions should have input into how algorithms are designed and, especially, into the assumptions that underlie them. Several forward-thinking organizations have recognized the value of this kind of departmental and divisional diversity. Consider LivePerson, which develops conversational commerce and AI software. As reported by MIT Sloan, LivePerson places its customer service staff alongside its coders during the development process so that the product benefits from a diversity of perspectives.
Builders of AI tools should be seen as partners, not siloed entities. When you harness cross-departmental and divisional diversity, bringing in HR, Sales, Marketing, Finance, and IT when building AI tools, you'll be far better placed to catch bias early and to build well-designed AI tools.
Other biases
Gender, racial, cultural, departmental, and dataset biases by no means constitute the full gamut of AI biases. Scientists at the Biogerontology Research Foundation, for example, developed an "Aging AI" that used deep neural networks trained on the blood tests of relatively healthy patients. Because many age-related diseases share symptoms, very few healthy patients over the age of 60 were available to develop the system. When testing began, age bias was found to pervade the tool.
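The lesson generalizes: before training, audit how thinly each segment, here each age band, is covered. A minimal sketch under assumed column names:

```python
import pandas as pd

def age_band_coverage(train_df: pd.DataFrame,
                      age_column: str = "age") -> pd.Series:
    """Number of training examples per decade of age.

    Sparse bands (e.g. 60s, 70s) warn that the model may generalize
    poorly there. The column name is an assumption.
    """
    bands = (train_df[age_column] // 10 * 10).astype(int)
    return bands.value_counts().sort_index()

# Hypothetical usage with a blood-test dataset:
# print(age_band_coverage(blood_tests))
# A near-empty 60/70 band is exactly the gap behind the Aging AI's bias.
```

Had a coverage table like this been inspected up front, the scarcity of healthy older patients, and the age bias it would produce, could have been flagged before the system was built.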
The importance of mitigating bias in AI tools cannot be overstated. This should be a priority for all businesses, large and small, in every industry. At a minimum, companies should follow the lead of Microsoft, which has established a “Fairness, Accountability, Transparency, and Ethics” (FATE) AI team to uncover biases in Microsoft’s AI systems. This type of oversight body is critical to ensuring biases don’t pervade the tools that we’ll inevitably rely on in the near future. As we build AI tools, it’s worthwhile to consider a key question (one that FATE is also exploring): “As our world moves toward relying on intelligent agents, how can we create a system that individuals and communities can trust?”