In late 2018, Google decided to prevent its “Smart Compose” feature—which predicts what users intend to write in emails and auto-completes sentences—from suggesting gendered pronouns such as “him” and “her”. Why? Google feared that the tool was biased and prone to suggesting the wrong pronoun. The change was prompted by a Google research scientist who typed “I am meeting an investor next week” and saw Smart Compose suggest the follow-up: “Do you want to meet him?” The problem? The investor in question was a woman.
Bias is AI’s Achilles’ heel. Research by DataRobot has found that nearly half (42%) of AI professionals in the US and UK are “very” or “extremely” concerned about AI bias. Bias can linger in the most unexpected places, and no system is immune. As AI becomes further entrenched in the workplace, it’s essential to be on the lookout for the places and platforms where bias may hide.
AI tools promise to minimize bias in hiring. According to research by LinkedIn, 43% of recruiters and hiring managers say a key benefit of AI is its potential to remove human bias. Yet the same AI tools that promise to minimize bias can also inject bias into the hiring process. Take Google’s job-ads algorithm, for example, which disproportionately displayed high-paying jobs to men. Moreover, many AI-powered hiring tools rely on facial recognition technology, and research has shown that even top-performing facial recognition systems misidentify Black faces at rates five to ten times higher than white faces.
While eliminating bias from AI-powered hiring tools is difficult, there are many steps we can take in the right direction. It starts with oversight: the team responsible for building, implementing, and using hiring tools must be diverse, otherwise its members’ own biases will be baked into the tool. Second, data must be scrutinized. Too many companies rely on AI-powered hiring tools trained largely on historical data. Because many companies have historically given preference to certain classes of individuals (for example, white males and graduates of top-tier universities), those same individuals are advantaged by algorithms trained on historical data. These and similar associations must be identified and removed before they are built into AI tools. Finally, all AI-powered hiring tools should be subject to internal and external audits. Companies such as HireVue, which builds AI-based hiring tools, have committed to both internal and external audits to pinpoint where bias may exist in their tools. This is a step in the right direction.
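To make the auditing step concrete, here is a minimal sketch of one common check an internal or external audit might run on a hiring tool’s outcomes: comparing selection rates across groups against the “four-fifths rule” threshold. The data, group names, and threshold are illustrative assumptions, not any vendor’s actual methodology.

```python
# Hypothetical adverse-impact check on a hiring tool's outcomes.
# All numbers and group labels are invented for illustration.

def selection_rates(outcomes):
    """Selection rate (hires / applicants) per group."""
    return {g: hired / total for g, (hired, total) in outcomes.items()}

def adverse_impact_ratios(outcomes):
    """Ratio of each group's selection rate to the highest group's rate.
    Ratios below 0.8 (the 'four-fifths rule') often warrant review."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# (hired, total applicants) per group -- entirely made-up numbers
outcomes = {"group_a": (45, 100), "group_b": (27, 100)}
print(adverse_impact_ratios(outcomes))
# group_b's ratio is 0.27 / 0.45 ≈ 0.6, below the 0.8 threshold
```

A check like this doesn’t prove or disprove bias on its own, but it is the kind of simple, repeatable signal that audits use to flag where deeper scrutiny is needed.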
Workplace collaboration tools such as Slack have become ubiquitous. Recently, several companies have started developing and launching AI-based tools to help managers and leaders understand the communication happening on email, Slack, and similar platforms. Vibe, for example, is a Tokyo-based company that searches public Slack messages and uses AI to assess employee satisfaction. It does this by scanning messages for keywords and emojis that represent an employee’s state of mind across five emotions: happiness, irritation, disapproval, disappointment, and stress.
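A minimal sketch of the keyword-and-emoji approach the article describes might look like the following. The lexicon and message are invented for illustration; a real tool like Vibe would rely on far richer models and data.

```python
# Toy keyword/emoji mood scoring across the five emotions mentioned above.
# The lexicon entries are assumptions, not Vibe's actual keyword lists.

LEXICON = {
    "happiness":      {"great", "thanks", "🎉", "😄"},
    "irritation":     {"ugh", "😒"},
    "disapproval":    {"disagree", "👎"},
    "disappointment": {"unfortunately", "😞"},
    "stress":         {"deadline", "asap", "😰"},
}

def score_message(text):
    """Count lexicon hits per emotion in a single message."""
    tokens = set(text.lower().split())
    return {emotion: len(tokens & words) for emotion, words in LEXICON.items()}

msg = "thanks team, great demo 🎉 but the deadline is tight"
print(score_message(msg))
# happiness scores 3 ("thanks", "great", "🎉"); stress scores 1 ("deadline")
```

Even this toy version hints at the fragility of the approach: a simple word list cannot distinguish sarcasm, context, or the different communication styles discussed below.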
The problem is that many of these AI-powered tools don’t account for the fact that individuals have different emotional and other predispositions. Men and women, for example, behave differently on workplace collaboration tools. Leah Fessler, a reporter at Quartz, has noted that “Your company’s Slack is probably sexist”. She explains that men tend to dominate public-channel Slack conversations, while women are more likely to use supportive and friendly punctuation.
Slack itself is increasingly gravitating towards artificial intelligence. In particular, it has created a “work graph” that analyzes how users are interrelated. The concept of the “work graph” is not new. Google has built its own “knowledge graph” and Facebook its own “social graph”. The work graph allows Slack to enable faster and more accurate searches within the Slack platform, while also allowing users to identify which messages matter most. According to MIT Technology Review, it “aims to become a ruthlessly organized, multitasking assistant who knows everything that’s going on and keeps you briefed on only the most salient events.”
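To illustrate the idea of a graph of workplace relationships, here is a minimal sketch of a “work graph” as a weighted adjacency map, with interaction counts used to rank a user’s closest contacts. The structure and the ranking heuristic are assumptions for illustration; Slack’s actual work graph is far more sophisticated.

```python
# Toy "work graph": who interacts with whom, and how often.
# Names and interactions are invented for illustration.
from collections import defaultdict

interactions = [("ana", "ben"), ("ana", "ben"), ("ana", "cho"), ("ben", "cho")]

# Build an undirected, weighted adjacency map.
graph = defaultdict(lambda: defaultdict(int))
for a, b in interactions:
    graph[a][b] += 1
    graph[b][a] += 1

def closest_contacts(user):
    """Contacts sorted by interaction weight, strongest first."""
    return sorted(graph[user].items(), key=lambda kv: -kv[1])

print(closest_contacts("ana"))  # [('ben', 2), ('cho', 1)]
```

A structure like this is what lets a platform prioritize search results and surface “the most salient events”: messages from strongly connected contacts can be weighted above the rest.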
Given that the average user sends 70 messages per day, it’s all but inevitable that many businesses will rely on AI to better understand their workers’ collaboration on Slack and similar tools. It’s critical that we be vigilant about understanding how the models underlying these tools are trained and how they are susceptible to bias.
AI tools are quickly infiltrating sales organizations. One of the most promising use cases is lead scoring. According to Gleanster Research, half of leads are qualified, but not yet ready to buy. Some AI tools allow companies to mine all the various channels they rely on to connect with customers and extract demographic, firmographic, and technographic information and, ultimately, determine the quality of leads. These tools can even pinpoint consumers’ sentiments to predict their propensity to buy.
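A lead-scoring model of the kind described above can be sketched, in its simplest form, as a weighted sum of normalized signals. The feature names, weights, and qualification threshold below are illustrative assumptions, not any vendor’s actual model.

```python
# Toy lead scoring: weighted sum of firmographic and engagement signals.
# Weights and threshold are invented for illustration.

WEIGHTS = {"company_size": 0.3, "budget_fit": 0.4, "engagement": 0.3}
QUALIFIED_THRESHOLD = 0.6

def score_lead(features):
    """Weighted sum of lead features, each normalized to the 0..1 range."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

lead = {"company_size": 0.8, "budget_fit": 0.9, "engagement": 0.4}
s = score_lead(lead)
print(s, s >= QUALIFIED_THRESHOLD)  # 0.72, qualified
```

Notice that every choice in such a model—which features to include and how to weight them—is a place where bias can enter, which is the concern the following paragraphs take up.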
As with all AI tools, it’s important to understand how the models underlying AI-powered sales tools are built. Most tools aim to uncover the “low-hanging fruit”: leads that are easiest to close and have the biggest budgets. Rarely do these tools give the big picture. For example, they often don’t account for which leads are likely to have the highest lifetime value, or which are likely to give the highest net promoter score.
Another problem is that many companies conflate lead generation with lead scoring; that is, they assume that prospects who fit their ideal customer personas are their ideal customers. They don’t consider lead readiness—specifically, whether leads are ready to buy now. A third problem is that AI-powered sales tools are often biased by historical sales data. They give preference to prospects who look like past customers and, in doing so, fail to account for changing market dynamics and for customers who haven’t purchased in the past but could become stellar customers with increased outreach and advertising.
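The historical-bias problem can be made concrete with a tiny sketch: a score based purely on similarity to past customers assigns zero value to any segment the company has never sold to, regardless of actual fit. The segments and history below are invented for illustration.

```python
# Toy "lookalike" scoring biased by sales history.
# Segments and counts are invented for illustration.

past_customers = ["enterprise", "enterprise", "smb", "enterprise"]

def lookalike_score(segment):
    """Share of historical customers in the lead's segment."""
    return past_customers.count(segment) / len(past_customers)

print(lookalike_score("enterprise"))  # 0.75
print(lookalike_score("startup"))     # 0.0 -- never sold to, so scored as worthless
```

This is why tools trained only on who bought before can systematically overlook promising new segments: the data encodes the past, not the opportunity.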
According to Harvard Business Review, companies that use AI for sales are able to increase their leads by 50%. AI helps eliminate the guesswork and enables sales reps to focus their time on establishing strong relationships with customers. The key to unlocking the potential of these tools is to recognize where bias exists and take proactive steps to minimize it.
AI has been dubbed the greatest innovation since the steam engine. As AI becomes a more integral part of our workplaces, we must constantly challenge and scrutinize the assumptions underlying its tools. By pinpointing where AI bias may be lurking, we can ensure that AI empowers us rather than constrains us.