Interview With Steve Levy On Recruiting and AI
Why leadership and talent management should look beyond the temporary hype and consider the entire picture
Discussion timing context: Steve Levy and I spoke in November 2024. All images are either sourced from Pixabay or generated. All discussion content is owned by the participants of the discussion. None of this discussion may be shared with any artificial intelligence without permission from one of the participants. You can reach Steve Levy on LinkedIn, Facebook and X and you can read a full bio on Steve at his about.me link.
We met quite a few years ago on Twitter (now X) and you make many good observations on LinkedIn related to talent. I highly recommend people follow your thoughts. Artificial Intelligence (hereafter referred to as AI) has become the hot topic in every industry, and talent and recruitment have felt its impact. Before we talk about AI and this industry, give readers a quick introduction to yourself.
I worked as an engineer in the past and I'm familiar with AI. Most of what underlies these AI technologies is not new and has been around for many decades. What has changed is that we can store and retain far more information, and we have far more processing power to work with it. In the recruiting and talent industry, there are some assumptions about how AI will affect things. For instance, you hear a lot about how AI will help with skills-based hiring. Someone recently asked me how AI-driven skills-based hiring would shape the recruitment landscape. Think of it this way: people pass driving tests all the time to show they have the skill of driving. Some driving tests can even be comprehensive. Then a few of those same drivers who pass the test do dumb things like drinking-and-driving, texting-and-driving, or something else as dangerous. Underneath these mistakes is a lack of empathy about what can happen to others. Yet they have the skill of driving. The assessment showing they can drive doesn't matter when they end up in a car accident.
Think about this in the context of AI and skills-based hiring, where you get the skill but miss out on a candidate who is relatable, empathetic, and so on. In 2016 - eight years ago - I wrote an article called Artificially Stupid Recruiting (I highly recommend readers read this article - Steve highlights some key points about human experiences and how we relate to others).
In the recruitment and talent industry, what fears have you seen related to artificial intelligence?
In general, many people are afraid of things they've seen in movies like The Terminator or other entertainment. We have a regular discussion with our recruiters about their struggles and fears. In fact, I have an upcoming talk called “Courageous Recruiting” because, when you think about it, courage is an underlying virtue that anchors other virtues. For instance, we see a fear of AI that's like a baby approaching a “visual cliff” (a classic experiment in child psychology). The baby doesn't understand depth and fears the cliff. Yet from where the baby sits, it can't see that there's a way across (imagine a bridge just out of view of the cliff).
Human nature tends to fear what it doesn't understand. In my career - I'm 65 - I've seen a lot of shifts. I've seen a lot of catastrophizing - remember the Y2K bug and how it was going to be the end of the world? There have been many periods of panic; these all too often have a basis in fear of the unknown. Culture, such as entertainment, can anchor fear on top of this as well, since we tend to fear what we've learned to fear. What does watching a movie like The Terminator teach us to fear? When something feels unknown to us, we make “sense” of it through what we've seen, even if what we've seen is fiction.
Being courageous means that we don't have to fear new technology; we explore it. Vincent van Gogh said, “If you hear a voice within you say, ‘you cannot paint,’ then by all means paint, and that voice will be silenced.” The people who make it a mission to learn and play with these new technologies, and to incorporate them where they're useful, will be the ones who survive. All progress comes with a cost - we move forward, but we pay a price for it. There's a great quote from “Inherit the Wind” that captures this sentiment:
Progress has never been a bargain. You have to pay for it.
Sometimes I think there's a man who sits behind a counter and says, "All right, you can have a telephone but you lose privacy and the charm of distance."
Madam, you may vote but at a price. You lose the right to retreat behind the powder puff or your petticoat.
Mister, you may conquer the air but the birds will lose their wonder and the clouds will smell of gasoline.
For instance, AI may eventually be able to better address edge cases, but then we have to address the biases or prejudices that come with those solutions. Think of the following hypothetical scenarios, each involving potential impact to human or other life:
How many dogs would you be willing to hit with a car, if there was no other option, to save the human lives in the car?
How much would you be willing to invest in AI solutions if you had data showing that every $1 million invested results in 1,000 people losing their jobs, and further data showing that a percentage of those people would commit suicide?
Think of the type of people who would be willing to answer those questions or even entertain those scenarios! If they're the ones building the AIs, then their ingrained biases and prejudices shape the solution, and you, the driver, live with that solution. Ultimately, AI will reflect the values of the people who create it. This means that to use AI, you have to subscribe to those values - like the Terms of Service for software apps. But you may not agree with those values.
You mentioned earlier that since 2023, approximately 65,000 recruiters have been laid off globally. How much of this is actually related to AI?
Not all of these layoffs are related to AI. But when the push into AI kicked into gear last year, it affected the industry, because a recruiter can now screen people efficiently with these tools. Before, a recruiter might thoroughly screen up to 50 people a day. With AI, you can significantly filter that initial pool before a recruiter even gets involved.
A recruiter can do more now with these AI tools. But the tools may also cause them to overlook talent that their own eyes would have caught or that would have stood out. In other words, recruiters can become dependent on tools that miss the human elements I mentioned earlier, and that may mean missing talent they would otherwise find.
Think of a recruiter using AI to transcribe an interview. They record the conversation with the person and then have AI transcribe it. After this, the recruiter could get an AI summary of that transcript, or even multiple summaries from various AI tools. But this may miss those key human elements. The AI may also mistranscribe statements in ways that change the context of an answer.
Among recruiters using these tools, I've noticed two types: one assumes the AI summary is correct and moves forward on that assumption. The other reads the summary and reflects on it alongside what they heard in the interview. This latter recruiter isn't just taking the summary at face value; they're balancing it against the elements AI doesn't pick up through summaries. That can prompt a follow-up if the recruiter feels a point was missed - and that may affect the outcome or their understanding.
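(An aside for technically minded readers: below is a minimal sketch of the transcribe-then-summarize workflow Steve describes, assuming access to OpenAI's hosted transcription and chat APIs. The model names and the file path are illustrative assumptions, not part of Steve's example. Pulling summaries from more than one model makes disagreement visible - but as Steve says, the recruiter still has to weigh both against what they actually heard.)

```python
# A rough sketch, not a production tool. Assumes the `openai` package
# is installed and OPENAI_API_KEY is set; model names are assumptions.
from openai import OpenAI

client = OpenAI()

def transcribe(audio_path: str) -> str:
    """Turn a recorded interview into plain text."""
    with open(audio_path, "rb") as audio_file:
        result = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
        )
    return result.text

def summarize(transcript: str, model: str) -> str:
    """Ask one model to summarize the transcript for a recruiter."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "system",
                "content": "Summarize this interview transcript for a recruiter.",
            },
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

transcript = transcribe("interview.mp3")  # hypothetical recording

# Two summaries from different models: where they disagree is exactly
# where the second type of recruiter goes back to their own notes.
for model in ("gpt-4o", "gpt-4o-mini"):
    print(f"--- {model} ---")
    print(summarize(transcript, model))
```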
Given what you've seen so far with AI and talent along with the proliferation of these tools, how do you believe leaders could improve their organizations with or around AI?
There's a gap right now with AI, and the big gap is education. People don't understand what AI is and what it can do. For instance, some people don't realize that different LLMs give different results, or that your results may contain hallucinations. Leadership may not be evaluating any of this.
Another improvement involves measuring the results and putting aside the hype. You get executives and human resources people who read and hear hype such as "AI will save a lot of money in G&A!" They read this and rush to make it a reality. But they may implement something that costs them later, and those costs could be much larger than any of the savings.
Some companies may be forgetting that there's an end user to all of this. In the talent space, that's the talent we recruit; in the product space, that's customers. Take the example of AI sending recruiting messages, saving big money because you don't need to hire people to send them - yes, it saves money. But how does the talent receiving those messages feel? And how do those recipients feel over time as they interact with these AI messages and responses?
Note: Paid subscribers will read additional questions and thoughts by Steve along with some of my own experiences related to these points. When this post is published, I will link it here.