AI Bias in the Spotlight: Investigating AI-Driven Hiring Claims in the Modern Workplace

In the ever-evolving world of HR and recruitment, artificial intelligence has emerged as a game-changer. AI-driven candidate screenings are revolutionizing the hiring process, making it more efficient and data-driven. However, this transformation brings both promise and challenges. In this blog post, we’ll explore the realm of AI-driven candidate screenings and their implications for HR investigations. We will delve into the potential pitfalls, the sources of bias, and the crucial role that workplace investigators play in ensuring fairness, as well as why transparent AI matters in this AI-driven era of recruitment.

The AI Revolution in Recruitment

Artificial intelligence has significantly altered the landscape of recruitment. Traditional hiring methods are often time-consuming, subjective, and susceptible to human biases. AI-driven candidate screenings promise to streamline the process, making it faster, more accurate, and less biased.

AI algorithms can swiftly analyze a large pool of resumes, evaluate candidates based on predefined criteria, and even predict their future job performance. They can sift through massive datasets, such as online profiles and social media activity, to generate a more comprehensive view of a candidate's qualifications and suitability. These capabilities can dramatically reduce the time and resources spent on recruitment, freeing HR professionals to focus on more strategic aspects of their roles.

Potential Pitfalls When AI Is Involved in the Matter Under Investigation

While AI-driven candidate screenings offer great potential, they are not without their pitfalls. It's crucial to be aware of these challenges to ensure the effective and ethical use of AI in recruitment.

1. Data Biases: AI systems learn from historical data, which can be riddled with biases. If a company's past hiring decisions were biased, the AI algorithms may perpetuate those biases. For example, if certain demographic groups were underrepresented in past hires, AI may continue to underrepresent them. Similarly, if resumes of past successful hires are used as a model, an algorithm could learn that certain words in a resume are preferable while others are not, favoring candidates with similar backgrounds. Take the well-known example of Amazon's experimental AI recruiting tool, which was found to discriminate against women (a sketch of the underlying mechanism appears after this list). Recognizing and mitigating these biases is a critical task for HR professionals and AI developers.

2. Lack of Transparency: Many AI algorithms are seen as “black boxes”: they make decisions based on complex calculations that are difficult to explain or understand. That lack of transparency can create skepticism and mistrust among candidates, undermining the hiring process. It also carries legal risk: an employer trying to defend an employment decision based on an algorithm it does not know or understand may struggle to show a legitimate, non-discriminatory reason for the decision in response to a discrimination claim.

3. Overreliance on Technology: While AI can enhance the recruitment process, overreliance on technology can depersonalize it. Candidates appreciate the human touch in the hiring process, and too much automation can alienate potential employees.

4. Legalities: We’ve now seen technical assistance documents from the EEOC on best practices for AI tools in employment under both the Americans with Disabilities Act and Title VII, so we know the EEOC intends to enforce the laws it administers against improper uses of AI. Many states and jurisdictions are also proposing their own laws on AI use. NYC Local Law 144 is the first law to regulate automated employment decision tools, but it will not be the last.
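
To make the data-bias mechanism in item 1 concrete, here is a minimal, hypothetical sketch in Python. Everything in it is invented for illustration: a naive screener scores resumes by how often each word appeared in previously hired versus rejected resumes, and because the historical outcomes are skewed, a gendered term picks up a negative weight, much as was reported with the Amazon tool.

```python
# Hypothetical sketch of how historical bias leaks into a word-based
# resume screener. All data and the scoring scheme are invented.
from collections import Counter

# Invented historical outcomes: past hires skew toward one group, so
# the token "women's" appears mostly in resumes that were rejected.
history = [
    ("captain men's soccer team software engineer", "hired"),
    ("software engineer led backend team", "hired"),
    ("captain women's chess club software engineer", "rejected"),
    ("women's coding collective organizer software engineer", "rejected"),
]

hired_words = Counter()
rejected_words = Counter()
for resume, outcome in history:
    target = hired_words if outcome == "hired" else rejected_words
    target.update(resume.split())

def word_weight(word: str) -> int:
    """Naive weight: how much more often a word appears in hired resumes."""
    return hired_words[word] - rejected_words[word]

def score(resume: str) -> int:
    return sum(word_weight(w) for w in resume.split())

# Two equally qualified candidates; the only difference is a gendered token.
print(score("software engineer captain men's soccer team"))   # prints 4
print(score("software engineer captain women's chess club"))  # prints -4
```

In a real system the signals are subtler, but the principle is the same: the model has no notion of merit beyond the patterns in its training data.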

Sources of Bias in AI-Driven Candidate Screenings

Understanding the sources of bias in AI-driven candidate screenings is crucial for HR investigators. Biases can manifest in various ways:

  • Data Bias: As mentioned earlier, AI algorithms learn from historical data. If past hiring decisions were influenced by prejudices or systemic biases, the AI system may perpetuate these biases.

  • Algorithm Bias: The algorithms themselves can be biased. If not designed carefully, they may inadvertently discriminate against certain groups. For example, an algorithm might give less weight to a candidate's qualifications if they have a non-traditional educational background.

  • Feedback Loop Bias: AI systems can create feedback loops that reinforce bias. For instance, if a company predominantly hires people from a specific educational institution, the AI might prioritize candidates from that institution, perpetuating the cycle. The brief simulation after this list shows how quickly such a loop can harden.

  • Human Bias: Humans can introduce bias when they provide feedback and assessments that AI systems learn from. This can happen when human reviewers are influenced by their own biases, consciously or unconsciously.
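
To illustrate how a feedback loop can intensify on its own, here is a hypothetical simulation in Python. All numbers are invented: the screener's only signal is which school current employees attended, and each new hire is fed back into its data.

```python
# Hypothetical feedback-loop simulation: the screener's preference for a
# school is simply that school's share of current employees, and every
# new hire re-enters the training data. Numbers are invented.

employees = {"School A": 7, "School B": 3}  # invented starting workforce

for _ in range(10):
    total = sum(employees.values())
    # The "model": prefer whichever school dominates the current data.
    preferred = max(employees, key=lambda s: employees[s] / total)
    employees[preferred] += 1  # the new hire reinforces the preference

share_a = employees["School A"] / sum(employees.values())
print(f"School A share after 10 hires: {share_a:.0%}")  # 85%, up from 70%
```

Even a mild initial skew hardens toward exclusivity, because the model never encounters evidence that contradicts its own past selections.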

The Role of Workplace Investigators in AI Bias Claims

Workplace investigators are increasingly called upon to probe into claims of discrimination stemming from AI use in hiring. Their roles may be multifaceted, depending on the type of claim or investigation being conducted:

  • Understanding the AI System: First, if an AI system is involved, it’s crucial to understand how it was used. Investigators should learn how the AI system functions, including its data sources, algorithmic design, and decision-making processes, as well as who trained it and how.

  • Identifying Sources of Bias: If the complaint or investigation involves concerns of bias, the investigator may need to examine potential sources of bias within the AI system, such as data biases, algorithmic biases, and human-induced biases.

  • Gathering Data: When it comes to an investigation involving AI, collecting relevant data is crucial. Investigators should seek information on how the AI system was trained, the data it uses, and any previous audits or assessments conducted. Keep in mind that the employer may not be the entity that holds this information, especially if a vendor was used. In such cases, the investigator may need to turn to outside resources to obtain this information and conduct a thorough investigation.

  • Reviewing Legal Compliance: If the claim or investigation involves a potential violation of law or policy, understanding the relevant laws and regulations is also helpful. For example, NYC Local Law 144 requires a bias audit before using an automated employment decision tool (AEDT) and mandates notification to job candidates. We provide more on this groundbreaking law below.

NYC Local Law 144 of 2021: A Blueprint for Investigation

NYC Local Law 144 provides a framework that can guide investigations into AI bias claims. Key aspects include:

Definition of AEDT: AEDTs include tools that use machine learning, statistical modeling, data analytics, or artificial intelligence to substantially assist or replace discretionary decision-making in employment. Why is this important? Because if a claim or investigation involves AI or potential AI, we want to understand the different ways in which “AI” may be defined, and NYC Local Law 144 offers one answer in how it defines AEDTs.

Bias Audit Requirements: The law mandates an impartial bias audit of the AEDT, including calculations of selection rates and impact ratios across different demographic categories (we sketch this arithmetic below). If the investigation involves the use of an AEDT, an investigator may examine whether the employer conducted a bias audit before deploying the tool in the workplace. NYC Local Law 144 outlines the requirements employers must follow when using AEDTs. If a bias audit was conducted and indicated disparate impacts, investigators may consider assessing whether the employer or agency has taken steps to mitigate those biases.
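
For a sense of what those calculations involve, the following is a simplified sketch in Python, with invented counts, of the core arithmetic: each category's selection rate (candidates selected divided by applicants) and its impact ratio relative to the most-selected category. The law's adopted rules specify the exact demographic categories and reporting details.

```python
# Simplified sketch of the bias-audit arithmetic referenced by NYC Local
# Law 144: selection rates per category and impact ratios relative to the
# highest-selected category. All counts are invented for illustration.

applicants = {"Group 1": 200, "Group 2": 150, "Group 3": 100}  # invented
selected   = {"Group 1": 50,  "Group 2": 30,  "Group 3": 12}   # invented

selection_rates = {g: selected[g] / applicants[g] for g in applicants}
best_rate = max(selection_rates.values())
impact_ratios = {g: rate / best_rate for g, rate in selection_rates.items()}

for g in applicants:
    print(f"{g}: selection rate {selection_rates[g]:.0%}, "
          f"impact ratio {impact_ratios[g]:.2f}")
# Group 1: selection rate 25%, impact ratio 1.00
# Group 2: selection rate 20%, impact ratio 0.80
# Group 3: selection rate 12%, impact ratio 0.48
```

An impact ratio well below 1.00, such as Group 3's 0.48 here, is the kind of disparity that the familiar four-fifths rule of thumb would flag for closer scrutiny.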

Data Requirements for Audits: The audit must use historical data from the employer’s or employment agency’s use of the AEDT. Test data can be used if there is insufficient historical data.

Transparency: NYC Local Law 144 requires employers using AEDTs to provide notice to those who may be impacted by their use. Investigators may look into the ways the employer put affected individuals on notice of AEDTs and the extent of the information conveyed.


Approaching Investigations in Light of Known AI Challenges

Practical Tips for Workplace Investigators:

  • Examine Bias Audits: Was a bias audit or impact assessment conducted? If so, review the summary of bias audit results, focusing on the methodology, data sources, and findings.

  • Collaborate with Experts: Understanding complicated data may require specialists, such as data scientists or AI experts, who can explain the technical aspects of AI systems and help interpret audit results. Do you need to engage third parties, and, if so, is that within the scope of your investigation?

  • Potential Corrective Measures: Based on findings, workplace investigators may consider suggesting measures to mitigate biases, such as refining the AI’s training data, conducting broader impact assessments, or adjusting algorithmic parameters.

Navigating the Complex Terrain of AI in Hiring

Workplace investigators play a pivotal role in addressing AI bias in hiring. By understanding the intricacies of AI systems, staying informed about relevant laws and regulations, and applying a structured approach to investigations, they can effectively navigate claims of discrimination and contribute to fairer and more equitable hiring practices.

Are you an employer with a concern about whether your use of AI in employment decisions complies with current regulations? Or have you received an employee complaint about improper use of AI in employment decisions? Moxie Mediation can help. We offer neutral AI workplace investigation services for employers in multiple states. Contact us today for more information about our services.
