AI in Employment — A Troubling Issue in the Hiring Process

AI in employment has changed the hiring process forever. But is that really a good thing? Some troubling studies suggest otherwise.

Carissa Davis //March 3, 2023//

After the Equal Employment Opportunity Commission recently indicated it intends to increase scrutiny over employers’ use of AI in employment, recruitment, hiring and disciplinary decisions, employers are well advised to do the same. Automation in employment decisions usually goes one of two ways — it mitigates or increases bias.

With 99% of Fortune 500 firms and 25% of small companies using some form of AI in their employment processes, employers of all sizes now face legal exposure that did not previously exist. 

In its most recent public hearing, the EEOC hosted expert witness testimony on how AI may affect employer liability under the Americans With Disabilities Act, Title VII of the Civil Rights Act of 1964 and the Age Discrimination in Employment Act, among other civil rights laws. The hearing comes on the heels of the EEOC’s announcement that its enforcement priorities for 2023-2027 include AI in employment. This priority is unsurprising, considering the EEOC recently sued three companies for using online recruitment software that allegedly automatically rejected otherwise qualified candidates because of their age and gender. Around the same time, the EEOC issued guidance on how AI can exclude disabled workers.

The potential implications of “AI in employment” are vast, but below are a few practices that, according to recent guidance, are likely to place employers in the EEOC’s crosshairs. 

Implicit bias in — disparate impact out

“Facially neutral” criteria can operate to exclude certain protected classes. When a neutral policy or criterion disparately impacts members of a protected class, the risk of a legally viable claim is high. Here are a few EEOC-provided examples of how this plays out in the AI sphere, followed by a short sketch of how such an impact can be measured.

Many employers regard gaps in employment as a “red flag” and could ask AI to de-prioritize applicants with gaps. The result would likely disproportionately exclude women (due to parental leave) and individuals with disabilities.

An employer could ask AI to prioritize workers in the ZIP codes near the work site. However, because of redlining, the employer may unintentionally exclude applicants whose families were historically forced to reside in other ZIP codes due to their race.

Personality tests have grown in popularity, but if an employer asks AI to exclude applicants who do not “exhibit optimism,” the test could screen out an otherwise qualified applicant with Major Depressive Disorder, which would violate the ADA. 
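
To make the disparate-impact mechanics concrete, here is a minimal sketch of the EEOC’s traditional “four-fifths rule” benchmark: if a protected group’s selection rate falls below 80% of the highest group’s rate, the screen is presumptively suspect. The groups, counts and pass rates below are invented for illustration only.

```python
# Hypothetical illustration of the EEOC's "four-fifths rule" for adverse
# impact, applied to an automated screen's pass rates by group.
# All group names and numbers below are invented for the example.

from collections import Counter

def selection_rates(outcomes):
    """outcomes: list of (group, passed_screen) tuples."""
    passed = Counter(g for g, ok in outcomes if ok)
    total = Counter(g for g, _ in outcomes)
    return {g: passed[g] / total[g] for g in total}

def four_fifths_check(rates):
    """Flag groups whose selection rate is below 80% of the highest rate."""
    top = max(rates.values())
    return {g: r / top < 0.8 for g, r in rates.items()}

# Suppose a screen that de-prioritizes resume gaps produced these results:
applicants = [("men", True)] * 60 + [("men", False)] * 40 \
           + [("women", True)] * 35 + [("women", False)] * 65

rates = selection_rates(applicants)   # men: 0.60, women: 0.35
print(four_fifths_check(rates))       # {'men': False, 'women': True}
# women's rate (0.35) is ~58% of men's (0.60), well below the 4/5ths line
```

The four-fifths rule is only a rule of thumb, not a safe harbor, but a check of this kind is the sort of monitoring regulators expect employers to run on automated screens.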

The AI as “decisionmaker” may be no defense in light of user preference adaptation

Human intervention is not a cure-all, and machine learning could institutionalize existing practices. Here are a few examples.

If an employer identifies a group of “good” employees and seeks to hire individuals who display the same traits, automated machine learning and user preference adaptation may result in outcome replication. Employers end up with workforces identical to their current ones, which could stifle the innovation new perspectives bring, perpetuate the underrepresentation of traditionally underrepresented groups, and give rise to legal liability.

If an HR representative reviews applications and gives a “thumbs up” or “thumbs down” rating, the machine will learn and adapt to match that representative’s preferences. However, the EEOC’s focus on the role unconscious bias plays in such assessments, combined with the well-established tendency of humans to prefer people who are “like” them, means that AI’s “learned” preferences open the door to legal liability.
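
As a rough illustration of how preference adaptation bakes in a reviewer’s bias, the sketch below (hypothetical data; scikit-learn and NumPy assumed) trains a model on one reviewer’s historical thumbs-up decisions that partly track a “similar to me” proxy rather than skill. The fitted model reproduces that preference.

```python
# Minimal sketch (invented data and features) of "outcome replication":
# a model trained on one reviewer's thumbs-up/down labels learns whatever
# pattern -- including bias -- drove those labels.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(size=n)            # legitimate qualification
like_me = rng.integers(0, 2, size=n)  # proxy for similarity to the rater

# The reviewer's historical thumbs-up decisions lean on "like_me",
# not just skill -- the unconscious-bias pattern the EEOC describes.
thumbs_up = (skill + 1.5 * like_me + rng.normal(size=n)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, like_me]), thumbs_up)
print(model.coef_)  # the "like_me" coefficient is large: the bias is learned
```

Nothing in the pipeline ever names a protected trait; the model simply inherits the correlation from the training labels, which is why human review of past decisions is not a cure-all.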

While the use of AI itself will not get an employer in trouble, the EEOC has made clear that it is incumbent upon employers to educate themselves on the risks and benefits of such use and the processes and outcomes tied to it. “AI made me do it” will not be an effective defense. Employers that use AI should be prepared for increased scrutiny: the EEOC has indicated it is considering workplace audits, like those used in pay equity matters, as part of its “crackdown” on the unintended consequences of AI use.

To say that AI and automated machine learning in employment is a nuanced topic is, quite frankly, an understatement. Technology is ever-evolving, and EEOC guidance will likely lag far behind innovation. There can be no doubt, however, that employers who use AI risk attracting the EEOC’s attention. Employers must stay proactive in detecting insidious legal risks, lest they find themselves the “test case” that develops new law. One of the best ways to stay ahead of the curve is to speak with experienced Labor & Employment counsel.

Carissa Davis, an associate in the Labor & Employment department, provides all-inclusive services involving federal and state anti-discrimination law, wage and hour law, and labor disputes and negotiations.

Melissa Reagan is a member in Sherman & Howard’s Trial department, where she is a member of the firm’s Data Security and Privacy group.