From Body Language to Microexpressions

At one time, knowledge of body language was thought to be the height of sophistication in the psychology of job interviewing. An applicant might present impressive credentials, might sound like the perfect person for the job and dress perfectly for success, but his body language could still give him away. A mere crossing of the legs or arms, say, could betray an inhibited, fearful personality; not a good hire.

Learning to read body language did little to improve employers' odds of a good hire, though. Posture and movement revealed too little, or proved too hard to interpret, or applicants simply learned the language themselves and behaved accordingly.

The quest for tools to select the candidates most likely to succeed thus continued, and now some of the biggest names on Wall Street are experimenting with artificial intelligence. Goldman Sachs Group, Morgan Stanley, Citigroup and UBS Group AG are hoping that AI will help them divine the best candidates for their firms.

It’s not the first time artificial intelligence software has been used in the hiring process. Large companies have for some time found AI helpful in sorting the best resumes from the mountain of applications. Software programs can determine which resumes are worth looking at in a small fraction of the time it would take a mere human being.
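None of these firms publish their screening code, but the basic idea is easy to picture: score each resume against weighted keywords or criteria and keep the top scorers. Here is a toy sketch in Python; the skills, weights, and threshold are invented for illustration, and real systems are presumably far more elaborate.

```python
# Hypothetical sketch of automated resume screening: score each resume
# by weighted keyword matches and shortlist the top scorers.
# The keywords, weights, and threshold are invented for illustration.

REQUIRED_SKILLS = {"python": 3.0, "sql": 2.0, "financial modeling": 2.5}

def score_resume(text: str) -> float:
    """Return a relevance score based on weighted keyword hits."""
    text = text.lower()
    return sum(weight for skill, weight in REQUIRED_SKILLS.items() if skill in text)

def shortlist(resumes: dict[str, str], threshold: float = 4.0) -> list[str]:
    """Return applicant names whose resumes score at or above the threshold."""
    return sorted(
        (name for name, text in resumes.items() if score_resume(text) >= threshold),
        key=lambda name: score_resume(resumes[name]),
        reverse=True,
    )

if __name__ == "__main__":
    applicants = {
        "A. Jones": "Ten years of Python and SQL in financial modeling roles.",
        "B. Smith": "Experienced retail manager.",
    }
    print(shortlist(applicants))  # ['A. Jones']
```

Crude as it is, a scorer like this can rank thousands of applications in seconds, which is the whole appeal.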

What’s new is the claim that artificial intelligence can identify such traits as teamwork, integrity and judgment, and whether the applicant will fit into the corporate culture. These are some of the intangibles that often make the difference between a successful hire and an unsuccessful one.

And that’s crucial. In the scramble for the best, most talented people, these corporations are looking for a competitive edge in recruitment. Advocates of technology in hiring argue that the conventional means for evaluating a candidate have been found wanting.

Laszlo Bock, Google’s senior vice president of “people operations,” put it rather more vehemently: “One of the things we’ve seen from all our data crunching is that GPAs [grade point averages] are worthless as a criteria for hiring, and test scores are worthless.”

Even if AI can’t guarantee the best, employers hope it will at least help them screen out the wrong people, the ones who just don’t work out in spite of every measurable qualification and the most probing job interviews. Such mistakes can cost as much as three times an employee’s salary in lost business opportunities, according to an estimate by Capital One Financial Corp. Not to mention the personal grief all around when a disastrous hire is made and the person has to go.

However, the trend may reflect a naïve faith in the power of technology to solve all problems. The desire to subjugate that most unpredictable of phenomena, human behavior, to the rule of the algorithm is perhaps understandable, but it’s unlikely that it will reduce the imponderables by more than a little, if at all.

Some warn that the use of artificial intelligence in hiring has certain built-in problems. For example, Seattle-based Koru Careers uses what it calls a corporate “fingerprint,” seeking to match data on job candidates with data on the client’s own successful employees. But this may only yield more of the same types of people, not necessarily to the benefit of the team.
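Koru has not published its method, but a “fingerprint” match of this kind can be pictured as comparing a candidate’s trait profile against the average profile of a firm’s current top performers. A toy sketch, with trait names and numbers invented for illustration:

```python
# Toy illustration of a corporate "fingerprint" match: compare a candidate's
# trait vector to the centroid of the firm's successful employees using
# cosine similarity. Trait names and scores are invented for illustration.
import math

def centroid(vectors: list[list[float]]) -> list[float]:
    """Average the trait vectors of existing successful employees."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 means the profiles point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Hypothetical traits: [grit, teamwork, curiosity] on a 0-10 scale.
successful_employees = [[8, 6, 7], [9, 5, 6], [7, 7, 7]]
fingerprint = centroid(successful_employees)

candidate = [8, 6, 6]
print(f"fit score: {cosine(candidate, fingerprint):.3f}")
```

Note that by construction the highest scores go to the candidates who most resemble the people already there, which is exactly the homogeneity worry.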

Moreover, as these intangibles supersede the test scores and GPAs, the potential grows for straying into areas better left unexplored.

For example, HireVue specializes in analyzing video interviews for attributes such as engagement, motivation, and empathy. Word choice and speed of speech are among the indicators they look at. They also scrutinize a candidate’s microexpressions (fleeting facial expressions) for clues to what’s really going on. You may be able to expunge undesirable “expressions” from your body language, but they’ll catch you on those microexpressions. Or so they say.
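HireVue’s models are proprietary, but at least one of the indicators mentioned, speed of speech, is simple to picture: given a timestamped transcript from a speech-to-text system, it reduces to words per minute. A hypothetical sketch, with invented segment data:

```python
# Hypothetical sketch of one video-interview indicator: speaking rate.
# Input is (start_seconds, end_seconds, text) segments as a speech-to-text
# system might produce them. The segment data here is invented.

def words_per_minute(segments: list[tuple[float, float, str]]) -> float:
    """Compute speaking rate over all spoken segments."""
    total_words = sum(len(text.split()) for _, _, text in segments)
    total_seconds = sum(end - start for start, end, _ in segments)
    return 60.0 * total_words / total_seconds if total_seconds else 0.0

segments = [
    (0.0, 4.2, "I led a team of five analysts on the migration project"),
    (5.0, 8.5, "and we delivered it two weeks ahead of schedule"),
]
print(f"{words_per_minute(segments):.0f} words per minute")
```

The microexpression analysis is a far harder computer-vision problem, and how well it actually works is, as noted, an open question.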

As so often occurs, the data-crunching technology poses privacy issues. On the one hand, companies are anxious to avoid taking on anyone with a violent or racist character. An outfit called Fama searches social media for evidence of an applicant’s undesirable behavior or opinions that would not appear on a resume: participation in neo-Nazi or Islamic State websites, for example. As one data mining expert said, “If someone tweeted something racist 3,000 tweets ago, you wouldn’t find it, but a machine could.”
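The “3,000 tweets ago” point is simply a matter of scale: a machine scans an entire posting history as easily as the latest page. A minimal sketch of the idea; the flag list and posts are invented, and a real service like Fama presumably uses trained classifiers rather than a word list:

```python
# Minimal sketch of machine-scale social media screening: scan every post
# in an account's history for flagged terms, however old the post is.
# The flag list and posts are invented placeholders, not real data.

FLAGGED_TERMS = {"example_slur", "example_threat"}  # hypothetical placeholders

def flag_posts(posts: list[dict]) -> list[dict]:
    """Return every post containing a flagged term, regardless of age."""
    return [
        post for post in posts
        if any(term in post["text"].lower() for term in FLAGGED_TERMS)
    ]

history = [
    {"id": 1, "date": "2012-03-01", "text": "example_threat against a coworker"},
    {"id": 3200, "date": "2017-05-09", "text": "lovely weather today"},
]
for post in flag_posts(history):
    print(post["id"], post["date"], post["text"])
```

A human reviewer gives up after a page or two; the loop above never does.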

Yes, but it could also be used to search for politically incorrect speech, or pro-Israel sentiments.

Various state laws regulate what an employer can or cannot mine for, including signs of bigotry, violence, profanity and use of illegal substances. The Fair Credit Reporting Act (FCRA) recognizes the right to challenge the accuracy of public data collected about a private individual for such things as employment and credit ratings.

However, the job applicant may have no way of knowing what data about him was considered in the hiring process, and could be vulnerable to the hirer’s prejudices.

Advocates of AI in hiring will acknowledge that algorithms cannot entirely replace human judgment, but argue that AI is a valuable addition to the corporate toolkit.

As Anthony Onesto, vice president of human resources at Razorfish, said: “We’re still early, and ultimately it’s computers, technology and humans working together.”

That sounds like a reasonable approach. We hope they take it.