By Marty Greenaway, Search Consultant
AI has changed the landscape of the hiring process for candidates, employers, and recruiters. As we learn how AI can make hiring easier, we’re also discovering how it presents serious ethical challenges to the process.
The two types of AI most common in hiring today are predictive and generative, and employers and recruiters can use both for several functions during the hiring process.
Predictive AI aims to forecast future outcomes based on patterns in historical data, while generative AI focuses on creating original content.
Generative AI aids recruiters and employers with tasks such as drafting job descriptions, interview questions, and candidate engagement materials. For candidates, it offers an easy way to tailor resumes and cover letters to specific roles and companies, summarize job descriptions, and generate potential interview questions to prepare for.
Predictive AI can help employers identify top candidates, forecast hiring needs, optimize sourcing strategies, and improve candidate fit during the hiring process. For candidates, it can suggest which job postings match their resumes, provide salary predictions, and analyze skill gaps.
This all seems great, but when we look beneath the surface, we see that predictive and generative AI have significant flaws that can lead to unfair bias, security concerns, a dehumanized process, and an overall lack of transparency.
Both types of AI can inadvertently reflect or even amplify existing biases because these systems are usually trained on historical hiring data that mirrors past trends. Their recommendations may not surface the best candidate but instead favor profiles that resemble an employer’s previous hires, perpetuating discrimination based on gender, race, and age.
Some would say AI’s biggest flaw is security (please refer to Terminator movies 1 through 6 for context). The lack of security around AI systems that collect and analyze large amounts of personal data poses a significant threat to the safety of the hiring process.
Candidates’ data being fed into AI systems should raise concerns about how personal information is used, stored, and protected. Few clear regulations govern how long or where a candidate’s information is stored, or whether it is secure.
When employers rely heavily on AI-driven tools, the process can feel dehumanized. AI tends to focus on measurable metrics, diminishing the human element of evaluating and engaging with candidates. The result is generic responses and interactions that leave candidates feeling like data points rather than people with unique strengths.
An overall lack of transparency funnels all of these concerns into one. Many AI systems operate as “black boxes,” collecting information and applying complex algorithms that are not easily interpretable. If AI leads the decision-making process, there is little accountability for how and why candidates are rejected, leaving hiring managers and job hunters further frustrated.
These ethical challenges at the intersection of AI and recruitment pose a significant risk to the credibility and safety of the hiring process.
AI is a valuable asset and tool, but it should not be relied upon to make critical hiring decisions, as its output cannot be trusted to ensure fair, transparent, and respectful practices.