
A few weeks ago, VentureBeat published an article titled “Why You Shouldn’t Use AI for Hiring,” which argued that flaws in AI-based hiring tools make them unfair. As someone who has spent two decades in the recruiting tech space, leading research and product innovation at an AI-based hiring platform company, I want to offer a counterpoint to that story.
The story’s author, CodePath CTO Nathan Esquenazi, presents several key points as to why AI is problematic when making important decisions about people, including:
- AI carries the risk of bias
- Data used to train AI can be biased
- You can assign people to jobs without fancy AI
On these points, the author is dead wrong… err, right, actually. Absolutely correct. But I want to clarify some things about AI in recruitment, because in the right contexts it can be very useful.
First, we need to demystify the term “artificial intelligence”. When this phrase first came to prominence in the 1950s, it referred to a burgeoning effort to create machines that could mimic human problem-solving. In that context it made sense, and in the decades since it has captured the popular imagination more than perhaps any other scientific concept. The Terminator movie franchise alone has raked in billions of dollars, and Hollywood’s visions of ultra-intelligent AI have shaped the careers of countless young engineers working to bring them off the screen and into the real world. As computer scientist Astro Teller says, “AI is the science of making machines do the things they do in the movies.”
Today, the term “AI” refers to a wide range of techniques that process data of different types. Although these techniques originated from the metaphor of a computer that can “think” like a human, they don’t necessarily attempt to emulate the brain’s abilities. So the AI that is transforming our world with self-driving cars, medical image interpretation, and more is really just statistical analysis code. It can make sense of unstructured, complex, and chaotic data that traditional methods like correlation coefficients struggle with. And so, there is nothing particularly “artificial” about most of the AI techniques used, nor could most of them be called “intelligent”.
One of the great and scary aspects of AI is that it allows researchers to examine massive amounts of complex data and extract its predictive elements for use in various applications. This is what your fancy self-driving car does, and it is also what hiring-focused AI can do. The dangerous part is that people often do not fully understand which factors the AI weighs in its predictions. So if the training data is skewed, that skew can, and probably will, be replicated at scale.
And here’s the thing: bias is everywhere. It’s a ubiquitous and insidious aspect of our world, and the large datasets used to build AI reflect that. But while poorly engineered AI can unwittingly reinforce bias, the flip side is that AI also uncovers bias. And once we know it’s there, we can control it. (See, for example, the excellent documentary Coded Bias.)
In my role at Modern Hire, I work with psychologists and data scientists who examine candidate data to find ways to improve the “Four Es of Hiring”: efficiency, effectiveness, engagement, and ethics. Essentially, any hiring process should save time, predict job and organizational performance and retention, be engaging for candidates and recruiters, and be fair to all parties. With traditional pre-AI statistics, we could easily score numeric data like assessment responses, but we couldn’t do the same for unstructured data like resumes, background checks, typed responses, and interviews. Today, advanced AI techniques are allowing researchers to analyze and score these types of data sources, and it’s game-changing.
We can now use AI to quantify qualitative data sources like interview responses. And once you can quantify something, you can test whether it predicts important outcomes like job and company performance, and you can also check whether those predictions are skewed against protected or other groups. Traditional, technology-free interviews have long been biased: we humans are effectively bias machines, equipped with all sorts of cognitive shortcuts that help us quickly interpret the vast amount of information our senses take in every second. A traditional interview is little more than a first date in which the interviewer chats with the candidate and forms a highly unscientific impression of that person. With AI, however, we can automatically score interview responses and statistically evaluate those numerical results.
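To make that skew check concrete: a common fairness screen in employment selection is the four-fifths rule, which compares each group’s selection rate to the highest group’s rate and flags ratios below 0.8. The sketch below is a minimal illustration of that check; the scores, groups, and cutoff are invented for the example and are not Modern Hire data or methodology.

```python
def adverse_impact_ratios(scores, groups, cutoff):
    """Selection rate per group divided by the highest group's rate.

    scores -- numeric scores produced by a model (or human raters)
    groups -- a group label for each score (e.g., a demographic category)
    cutoff -- candidates scoring at or above this value count as "selected"
    """
    counts = {}
    for score, group in zip(scores, groups):
        passed, total = counts.get(group, (0, 0))
        counts[group] = (passed + (score >= cutoff), total + 1)
    selection = {g: p / t for g, (p, t) in counts.items()}
    best = max(selection.values())
    # A ratio below 0.8 suggests adverse impact under the four-fifths rule.
    return {g: rate / best for g, rate in selection.items()}

ratios = adverse_impact_ratios(
    scores=[4, 5, 3, 5, 2, 4, 5, 1, 3, 2],
    groups=["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"],
    cutoff=4,
)
# Group "b" passes at 0.4 vs. group "a" at 0.6, a ratio of about 0.67,
# which would fail a four-fifths screen in this toy example.
```

The same check works regardless of whether the scores came from human raters or a model, which is exactly what makes quantification valuable.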
At Modern Hire, we’ve built a feature called Automated Interview Scoring (AIS) that does just that. Importantly, it does not judge or rate how a person looks or sounds; those data sources are full of bias and irrelevant information. Our evaluation uses only the transcribed words a candidate speaks, content the candidate provides to us for use in the hiring process. Our philosophy is that only data candidates knowingly supply for decision-making should be evaluated. Additionally, we give candidates a clear AI consent message and the option to opt out of AI-based assessment.
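Modern Hire’s actual AIS models are not public, but a toy sketch can show what “scoring only the transcribed words” might look like in its simplest form: comparing a candidate’s transcript to an anchor answer that expert raters scored highly. Everything here, the similarity measure and the example texts alike, is a hypothetical illustration; production systems use far more sophisticated natural-language models.

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between two bag-of-words vectors (1.0 = identical)."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[word] * b[word] for word in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# A hypothetical "anchor" answer that expert raters scored highly.
anchor = "I stayed calm listened to the customer and proposed a refund"
candidate = "I listened carefully to the customer and offered a refund"

# Score the candidate's transcript against the anchor -- text only,
# with no appearance or voice signal in the input at all.
score = cosine_similarity(anchor, candidate)
```

Note that nothing about how the candidate looks or sounds can enter this score, because only the transcript text is ever represented.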
In the large data samples we have examined with AIS, we found that it can replicate the interview ratings of trained, expert interviewers, and it does so instantly. But what about bias? Are these AIS scores biased against protected classes? In fact, our data show that group differences in AIS scores are almost four times smaller than those in ratings from our trained subject-matter experts. In this way, AIS reduces time and effort, replicates human assessments, and does it all with significantly less bias.
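A claim like “almost four times less bias” can be made concrete by comparing effect sizes. The sketch below, using invented ratings rather than Modern Hire’s data, computes Cohen’s d, a standardized mean difference between two groups, for human ratings and for automated scores of the same candidates; the ratio of the two effect sizes expresses how much smaller the group gap is under the automated scores.

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Standardized mean difference between two score distributions."""
    pooled_var = (
        ((len(group_a) - 1) * stdev(group_a) ** 2
         + (len(group_b) - 1) * stdev(group_b) ** 2)
        / (len(group_a) + len(group_b) - 2)
    )
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Hypothetical ratings of the same candidates, split by demographic group.
human_d = cohens_d([4.1, 3.8, 4.4, 3.9], [3.2, 3.0, 3.5, 3.1])
ai_d = cohens_d([3.9, 3.7, 4.1, 3.8], [3.7, 3.5, 3.9, 3.6])

# A ratio above 1 means the automated scores show less group skew
# than the human ratings did.
bias_reduction = human_d / ai_d
```

The point of the exercise is that once both rating sources are numeric, the bias comparison itself is routine statistics.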
This article is far from an argument that AI should be used indiscriminately in the hiring process. If anything, it is less a refutation of the original article than an extension of it. A hammer is a tool that can demolish a house or build one. AI is likewise a powerful tool, and if applied in a thoughtful, careful, rigorous, and scientific way, it can drive major improvements in hiring technology. But we must always take care that the solutions we create help not only organizations but also individuals. As a psychologist, I want to use technology to improve hiring for people, not just for companies. And in that regard, we have never had a technology as useful as AI.
Eric Sydell is EVP of Innovation at Modern Hire, an AI-based hiring platform company, where he oversees all research and product innovation initiatives. He is an industrial-organizational psychologist, entrepreneur, and consultant with more than two decades of experience in the recruitment technology and human resources industry. He is also co-author of the new book Decoding Talent: How AI and Big Data Can Solve Your Company’s People Puzzle, published by Fast Company Press.
DataDecisionMakers
Welcome to the VentureBeat community!
DataDecisionMakers is the place where experts, including technical staff, working with data can share data-related insights and innovations.
If you want to read about innovative ideas and up-to-date information, best practices and the future of data and data technology, visit us at DataDecisionMakers.
You might even consider contributing an article of your own!