Applicants with outside-the-box backgrounds have always had to work harder to catch employers’ attention. As companies embrace AI for hiring, one Stanford professor warns, it’s only going to get worse.
In a journal article co-authored with Daniel Elfenbein of Washington University, Adina Sterling, who studies organizational behavior at Stanford’s Graduate School of Business, argues that companies are shutting out transformative applicants by automating their hiring processes.
“It’s becoming much harder to find unusual talent, given that these candidates don’t fit squarely into one category,” Sterling notes. Although a human hiring manager would likely see an improv comedian as a strong candidate for a sales role, for example, an algorithm probably would not.
Why not? Because while most models can filter for certain terms, they can’t consider how unique experiences or traits might map to corporate strategy. Unlike human beings, hiring algorithms can’t consider factors like interdependency, or how one choice might influence others the company faces. They can’t see, for instance, that the improv comedian could help not just the company’s sales team, but also its office culture.
The problem, Sterling explains, is that algorithms cast a wide net but take a “best athlete” approach to cut down the candidate pool. Although tomorrow’s models may be able to evaluate applicants holistically, today’s can’t. “It’s been about filling up the pool with applicants you think you need and separating them from the ones you don’t,” she says.
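To make the "best athlete" idea concrete, here is a minimal, purely illustrative sketch (not any vendor's actual system): every résumé gets scored against one fixed keyword list, and only the top scorers survive the cut. A candidate whose strengths don't happen to match the list, like the improv comedian, never makes it through.

```python
import re

# Hypothetical keyword list for a sales role (illustrative only)
JOB_KEYWORDS = {"sales", "crm", "negotiation", "pipeline", "quota"}

def score(resume_text):
    """Count how many job keywords appear in the resume."""
    words = set(re.findall(r"[a-z]+", resume_text.lower()))
    return len(words & JOB_KEYWORDS)

def shortlist(resumes, top_n=2):
    """'Best athlete' cut: rank everyone on one scale, keep the top N."""
    ranked = sorted(resumes, key=lambda r: score(r["text"]), reverse=True)
    return [r["name"] for r in ranked[:top_n]]

candidates = [
    {"name": "A", "text": "Exceeded sales quota, managed CRM pipeline"},
    {"name": "B", "text": "Improv comedian, audience rapport, quick thinking"},
    {"name": "C", "text": "Negotiation and sales experience"},
]

print(shortlist(candidates))  # ['A', 'C'] -- the comedian scores zero
```

Note what the sketch cannot do: there is no path by which "improv comedian" earns credit for sales aptitude or office culture, because the score is just set intersection with a predefined list.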
What does that mean for job seekers? It means that rather than portraying yourself as a unique hire, you should actually make yourself look like part of the pack. Here’s how to do it:
Make It Masculine
Sterling and Elfenbein’s study comes on the heels of Amazon’s attempt to automate its hiring process. Late last year, news broke that the e-commerce giant had been developing algorithms since 2014 to assign candidates scores ranging from one to five stars. Amazon’s aim? To make hiring a hands-off activity.
“Everyone wanted this holy grail,” an individual familiar with the initiative told Reuters. “They literally wanted it to be an engine where I’m going to give you 100 résumés, it will spit out the top five, and we’ll hire those.”
How did Amazon’s algorithm fare? A year in, the company discovered that the model was giving preference to applicants whose résumés used traditionally masculine verbs. Because Amazon trained its tool on résumés it received, its outputs reflected the male dominance of the tech industry. Résumés using the word “women’s” were penalized, while those that used terms like “executed” and “captured” were boosted.
Although outright lying on a résumé is never a good idea, don’t be afraid to tweak yours using a tool like Gender Decoder. This free resource was developed by researchers who published a paper on gendered wording in job advertisements and later turned their list of gender-coded words into a tool to check whether ads might discourage female applicants. Fortunately for job seekers, Gender Decoder can be used just as easily to spot feminine phrasings that might cause an algorithm to reject your application.
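The idea behind such a check is simple enough to sketch. The word lists below are a tiny invented subset for illustration, not Gender Decoder's actual lists (those come from the Gaucher, Friesen, and Kay research):

```python
import re

# Illustrative subsets only -- not the real Gender Decoder word lists
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "interpersonal"}
MASCULINE_CODED = {"executed", "captured", "driven", "competitive"}

def gender_coding(text):
    """Return the gender-coded words found in a piece of text."""
    words = re.findall(r"[a-z]+", text.lower())
    return {
        "feminine": [w for w in words if w in FEMININE_CODED],
        "masculine": [w for w in words if w in MASCULINE_CODED],
    }

resume_line = "Collaborative team lead who executed product launches"
print(gender_coding(resume_line))
# {'feminine': ['collaborative'], 'masculine': ['executed']}
```

Running a résumé line through a check like this flags which phrasings an Amazon-style model trained on male-dominated data might penalize or reward.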
Be Specific but Brief
Another great way to get your application turned down by a machine, according to hiring experts? Explaining your skills and experience in vague or superlative terms.
“Put things in the simplest, most straightforward language possible,” ZipRecruiter CEO Ian Siegel told CNBC. “The algorithms are really good at deducing [what] are the key skills for a job.”
If you work in project management, for instance, don’t describe yourself as “a top-notch project manager with experience using multiple industry-leading software programs.” Say something like, “A project manager with 10 years of experience who uses Jira and Asana daily.”
Although research suggests humans can connect emotionally with robots, don't expect machines to reciprocate with your résumé. At best, a screening model will pass over terms like "top-notch" and "industry-leading"; at worst, those terms might cause it to miss the meat of your application. And because, according to Sterling, skills and experience are the primary points hiring algorithms consider, the model may find little else to evaluate.
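A hypothetical keyword screen makes the point: the filter matches concrete skill names, so superlatives contribute nothing. The skill list below is invented for illustration.

```python
import re

# Hypothetical required-skill list for a project management role
REQUIRED_SKILLS = {"jira", "asana", "project"}

def matched_skills(resume_text):
    """Return which required skills literally appear in the text."""
    words = set(re.findall(r"[a-z]+", resume_text.lower()))
    return words & REQUIRED_SKILLS

vague = "Top-notch project manager using industry-leading software"
specific = "Project manager with 10 years of experience using Jira and Asana daily"

print(matched_skills(vague))     # {'project'}
print(matched_skills(specific))  # {'project', 'jira', 'asana'}
```

"Industry-leading software" is invisible to a literal matcher; naming Jira and Asana is what registers.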
Put on a Pretty Face
Today’s companies aren’t just using AI for résumé review. Increasingly, enterprises like Unilever are turning to it to conduct first-line interviews. HireVue, an AI-based video interviewing tool, claims that since adopting it, Unilever has reduced its time to hire by 90 percent while saving more than 50,000 hours of candidate time.
Some psychologists, however, worry facial analysis tools aren’t yet advanced enough to be part of the hiring process. Paul Ekman, who developed a taxonomy of emotion used since the 1970s for expression analysis, notes that no research to date shows automated systems interpret facial expressions accurately.
“If people know they are being observed, they change their behavior,” Ekman explains. He points out that many people become self-conscious—which interviewing algorithms may interpret as a lack of confidence—when told that their emotions will be analyzed.
Don’t attempt to manipulate interviewing algorithms, Ekman suggests, but do consider what traits and elements the employer is looking for in applicants. If a job description asks for a “go-getter,” highlight your excitement. If it mentions sincerity, avoid extreme expressions like grins or outright frowns.
For better or worse, algorithmic hiring is likely here to stay. Instead of trying to stand out, blend in. That might not be a smart strategy with a human recruiter, but it may be your best bet for winning over a bot.