AI-Powered Compensation Recommendations and Pay Equity Claims: An Investigative Challenge for Workplace Investigators

 
In the dynamic landscape of the modern workplace, Artificial Intelligence (AI) has become an essential tool in shaping crucial decisions. Lauded by some as the solution to pay equity issues, other experts point out that this seemingly objective approach to compensation recommendations can perpetuate biases. So what is a workplace investigator to do when faced with AI-powered pay equity concerns in an investigation?

As workplace investigators, we’re charged with promoting equitable, bias-free workplaces. However, AI poses novel challenges to this mission. Compensation algorithms could perpetuate historical pay gaps, while negotiation chatbots could inadvertently discriminate. In my recent cases, these tools allegedly propagated inequities. This guide details my investigative strategies.

While proponents laud AI’s potential to eliminate individual biases through data-driven pay recommendations, the reality is more nuanced.

I’ll expound on 1) the risks of algorithmic pay models, 2) emerging chatbot negotiation tools, 3) subtle biases persisting in performance reviews, and 4) my investigation considerations inspired by the Association of Workplace Investigators (AWI)’s Guiding Principles.

Understanding the Ways AI is Being Used for Compensation Determinations and the Data Behind It

AI-driven compensation decisions are typically generated by algorithms that analyze a multitude of factors, including education, experience, job role, and even geographical location. The idea is to create a system that is free from human prejudices, but the reality can be far more complex.

But beyond internal compensation decisions, there is now software being offered for employers to use autonomous negotiation systems (think: AI-powered bots) that can negotiate compensation (salary, benefits packages, employee-related agreements) with employees, like Pactum. For example, Pactum explains that its systems can “analyze vast amounts of data, such as market salary trends, employee performance metrics, and company budgets, to determine a fair and competitive compensation package…[and] HR professionals can ensure more objective and data-driven negotiations, reducing the likelihood of individual bias and promoting fair and equitable outcomes.”

However, opponents of autonomous systems making pay recommendations or decisions raise concerns about the data used to train these algorithms. If the historical data used for training reflects past biases, the system can reproduce them: if the training data embeds gender pay gaps, for instance, the chatbot risks perpetuating these disparities in its negotiated offers. Chatbots trained on biased historical data can propagate discrimination, however inadvertently, which demonstrates the persistent need for proactive auditing even as compensation processes evolve.

How Subtle Biases Manifest in Performance Reviews and Pay

While managers may believe pay decisions are impartial, AI analysis reveals subtle gender biases lurking below the surface. A study analyzing data from five companies found men received 25% more positive performance reviews than equally performing women. Male managers in particular tended to score men higher, especially in senior roles.

This suggests that even formal equality policies cannot always counteract unconscious biases. Women are subtly penalized through weaker reviews and pay that does not reflect their actual contributions and skills. Only by auditing the granular data can investigators surface these obscured prejudices.

For instance, as recounted in a Harvard Business Review article, an autonomous analysis of a tech company’s records uncovered that while men and women were equally likely to meet stated performance goals, women received far fewer promotions and smaller pay increases compared to their male peers.

Again, this indicates that unconscious biases impacted managers’ subjective evaluation and advancement of female employees, despite objective metrics showing equal capability. Without exposing these hidden biases through AI audits, they become self-perpetuating.
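The kind of audit described above can be sketched in a few lines. The records below are entirely illustrative (hypothetical employees, invented numbers); the point is the structure of the check: restrict to employees who met the same objective goals, then compare promotion rates and raises by gender.

```python
# Hypothetical audit: among employees who met their performance goals,
# compare promotion rate and average raise by gender.
records = [
    # (gender, met_goals, promoted, raise_pct) -- illustrative data only
    ("F", True, False, 2.0), ("F", True, True, 3.0), ("F", True, False, 2.5),
    ("M", True, True, 4.0), ("M", True, True, 3.5), ("M", True, False, 3.0),
    ("F", False, False, 1.0), ("M", False, False, 1.5),
]

def group_stats(gender):
    # Restrict to goal-meeters so both groups are objectively comparable.
    met = [r for r in records if r[0] == gender and r[1]]
    promo_rate = sum(r[2] for r in met) / len(met)
    avg_raise = sum(r[3] for r in met) / len(met)
    return promo_rate, avg_raise

for g in ("M", "F"):
    promo, raise_pct = group_stats(g)
    print(f"{g}: promotion rate {promo:.0%}, avg raise {raise_pct:.1f}%")
```

With these invented records, equally qualified men and women show different promotion rates and raises, which is exactly the pattern a granular audit is designed to surface.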

One driver is affinity bias, where people gravitate toward those they relate to. A male manager may more instinctively advocate for employees who share their gender, leading to disproportionate support and rewards for male reports. Similarly, shared backgrounds, hobbies, alma maters, and other factors subtly advantage candidates that hiring managers “click” with.

These affinity biases frequently manifest in subjective opinions of employee potential and competence. A manager may see upper-management potential in an employee who reminds them of themselves, discounting other qualified candidates. Or they may emphasize weaknesses in someone unrelatable while minimizing shortcomings in employees they like.

Without realizing it, managers allow affinity to color their assessment of merit. This shows why objective audits are essential: intuition alone cannot gauge whether equity exists. Only by scrutinizing the data can investigators identify whether all employees have equal opportunity to advance based on skill, not subconscious bias.

Navigating Pay Equity in an AI-Driven World

Achieving pay equity has long been a goal for organizations striving to foster a fair and inclusive work environment. While AI is intended to contribute to this goal, it can inadvertently introduce or perpetuate biases.

One challenge lies in the interpretation of fairness. AI models might treat all employees with the same job title and experience level equally, but the question remains: is historical pay inequity already embedded in the data? For example, if a gender pay gap has existed historically, an AI model trained on this data might unwittingly perpetuate these disparities.
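A minimal sketch shows the mechanism. Everything here is synthetic and illustrative: a hypothetical "market rate" recommender that never sees gender, yet anchors on prior salary, which is a proxy carrying a historical 8% gap forward into new offers.

```python
import random

random.seed(0)

# Synthetic history: same roles and experience, but women were
# historically paid ~8% less (the embedded gap).
history = []
for _ in range(500):
    gender = random.choice(["M", "F"])
    experience = random.randint(1, 10)
    base = 50_000 + 4_000 * experience
    pay = base * (0.92 if gender == "F" else 1.0)
    history.append({"experience": experience, "prior_pay": pay})

def recommend(prior_pay, experience):
    """Toy 'market rate' model: average pay of historically similar
    employees, blended with prior salary -- gender never enters."""
    similar = [h["prior_pay"] for h in history
               if abs(h["experience"] - experience) <= 1]
    market = sum(similar) / len(similar)
    return 0.5 * market + 0.5 * prior_pay * 1.03  # modest raise on prior pay

# Two equally experienced candidates; her prior pay reflects the old gap.
man = recommend(prior_pay=90_000, experience=10)
woman = recommend(prior_pay=82_800, experience=10)
print(f"man: {man:,.0f}  woman: {woman:,.0f}  gap: {man - woman:,.0f}")
```

Even though the model contains no gender variable, the recommendations reproduce part of the historical disparity because prior pay acts as a proxy for it.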

To navigate pay equity in an AI-driven world, investigators must go beyond the surface-level analysis of AI-generated recommendations. They need to scrutinize the outcomes and ensure that the AI system is not reinforcing existing inequalities. This requires a nuanced understanding of the intricacies of pay equity and a commitment to dismantling systemic biases that may be deeply ingrained in the data.

Challenges in Investigating AI Bias

Investigating bias in AI-powered salary recommendations poses its own set of challenges. The opacity of many algorithms, often referred to as the “black box” problem, makes it difficult for investigators to trace the decision-making process. Without a clear understanding of how the algorithm reaches its conclusions, it becomes challenging to identify and rectify biases.

Moreover, the evolving nature of AI systems adds another layer of complexity. Models are continuously updated and retrained based on new data, making it challenging for investigators to keep up with the changes and assess the ongoing fairness of the system.

To effectively handle investigations related to pay equity issues involving AI, it’s crucial for investigators to either gain a deep understanding of AI algorithms or partner with AI specialists. These experts can decipher the complex operations of these systems, such as the potential for reverse engineering, which can shed light on the decision-making processes of the AI systems.
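One concrete probing technique an AI specialist might apply to an opaque system is counterfactual testing: hold every input fixed, vary one feature, and observe how the recommendation moves. The model below is a stand-in I invented for illustration; with a real vendor system the same probe would be run against its API or recommendation logs.

```python
# Counterfactual probe of a black-box pay model: vary one input at a
# time and watch the output. The model here is a hypothetical stand-in.
def black_box_pay_model(features):
    """Stand-in for an opaque vendor model (illustrative only)."""
    pay = 55_000 + 4_200 * features["experience"]
    if features["salary_history"] < 70_000:  # hidden proxy effect
        pay *= 0.95
    return pay

baseline = {"experience": 6, "salary_history": 65_000}
probe = dict(baseline, salary_history=75_000)  # change only one feature

delta = black_box_pay_model(probe) - black_box_pay_model(baseline)
print(f"Changing only salary history shifts the recommendation by {delta:,.0f}")
```

A nonzero shift from a feature that should arguably be irrelevant (or that proxies for a protected characteristic) is the kind of finding such probes are meant to surface, even when the model's internals remain inaccessible.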

It is equally imperative for investigators in this field to stay informed about the latest developments in AI governance, ethical standards, and best practices for bias mitigation in AI systems.

Strategies for Investigating and Mitigating AI Bias

Considering the AWI Guiding Principles for workplace investigators, we explore strategies and considerations for investigating claims involving disparities due to an employer's use of AI in pay decisions:

  • Document Investigation Process Thoroughly: In line with Guiding Principle Number 9, investigators should maintain detailed records of each step taken in the investigation. This should include the methods used to access and analyze AI models and data sets, interactions with data scientists or AI experts, and any challenges faced during the process.

  • Clearly Define the Scope of Investigation: Consistent with Guiding Principle Number 3, investigators should ensure that the investigation aligns with the employer's defined scope. This involves understanding the specific allegations of pay disparity and focusing on collecting evidence that directly addresses these concerns.

  • In-depth Review of AI Training Data: When evidence gathering, investigators may examine and document the data used to train the AI models for biases and disparities. This review should also note any historical pay disparities that could have influenced the AI’s decisions.

  • Analyze Predictive Variables and Outcomes: Similarly, investigators may need to investigate and document the variables used by the AI system and their potential correlations with demographic groups. Consider conducting and recording an outcome audit to identify any adverse impacts on specific employee groups.

  • Ensure Transparency in AI Systems: When engaging in investigation planning, investigators may seek comprehensible explanations from the employer about how the AI systems operate. This may require access to the data used to train the AI models, as well as the data inputs for individual salary decisions. This can provide insights into potential biases in the data sets. Investigators may consider documenting instances where the AI system's decision-making process lacks transparency.

  • Assess and Record Employer’s Mitigation Efforts: An investigator may need to evaluate the steps taken by the employer to mitigate bias in their AI systems. This includes examining bias prevention measures implemented during the design phase and the frequency of audits for fairness. Perhaps the system’s recommendations changed over time; this may be relevant both to the system’s consistency and to determining when the employer was on notice of potential bias problems (and when mitigation efforts may have been taken).

  • Gather and Evaluate Evidence Rigorously: When evidence gathering, an investigator may consider collecting evidence that includes AI model outputs, pay decision records, and employee interviews. Evaluate specific versions of AI models being investigated, including any updates or changes made during the investigation period. This should include a review of the training data, algorithmic changes, and output decisions over time. This documentation may prove crucial for understanding how evolving models might affect pay decisions. Apply the “preponderance of the evidence” standard to assess whether pay disparities more likely than not occurred due to the AI system.

  • Preparing a Comprehensive Report: A report may include a detailed description of the investigation scope, the process followed, the evidence gathered, and the findings. It may also articulate the rationale behind each finding, based on the evidence. Thoroughly document any obstacles encountered in the investigation, such as the inability to access certain data or understand specific parts of the algorithm. This can be crucial for providing context to the findings.
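The outcome-audit step above can be illustrated with a small sketch. The data and group labels are hypothetical; the check applies the four-fifths rule of thumb (a common screening heuristic for adverse impact, not a legal conclusion) to AI-recommended raises.

```python
# Outcome audit sketch: compare rates of favorable AI recommendations
# across groups. Groups "A" and "B" and all values are illustrative.
recommendations = [
    # (group, got_above_median_raise)
    ("A", True), ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", True), ("B", False),
]

def selection_rate(group):
    rows = [r for r in recommendations if r[0] == group]
    return sum(r[1] for r in rows) / len(rows)

rate_a, rate_b = selection_rate("A"), selection_rate("B")
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"A: {rate_a:.0%}  B: {rate_b:.0%}  impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:  # four-fifths rule of thumb
    print("Flag: potential adverse impact; investigate further.")
```

A ratio below 0.8 does not itself establish bias, but it is the kind of documented, reproducible finding that supports the evidence-gathering and reporting steps listed above.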

By adhering to these strategies, consistent with the AWI Guiding Principles, workplace investigators can take steps to conduct thorough, fair, and effective investigations into claims of pay disparities involving AI, ensuring that their findings and recommendations are well-supported by evidence and aligned with the employer's defined scope of investigation.

Conclusion

In conclusion, the expanding influence of AI in compensation decisions brings forth unique challenges and responsibilities for workplace investigators. To ensure fairness in these algorithmically driven pay models, it's essential for investigators not only to have a deep understanding of AI systems but also to commit to ongoing education and adaptation in this rapidly evolving field.

This journey towards equitable AI usage in the workplace transcends mere technical adjustments. It calls for a systemic approach aimed at dismantling long-standing inequities and unconscious biases. For investigators, this is an unparalleled opportunity to guide AI towards fostering more equitable workplaces.

Embracing this challenge means staying informed and up-to-date with the latest developments in AI technology and its applications in the workplace. It involves continuous learning and adaptation to keep pace with the advancements in AI. By doing so, investigators can play a pivotal role in ensuring that AI serves as a tool for fair compensation, contributing to the creation of workplaces where equity and fairness are the norm.


Navigating the complexities of AI-driven pay equity demands expertise, transparency, and collaboration. Contact Moxie today to learn more about our workplace investigation services for pay equity investigations, including those involving AI algorithms.
