Episode 28 | Smart Hiring or Smart Risks? AI’s Impact on Employment Law
Summary
Artificial intelligence is reshaping how employers screen candidates and make hiring decisions—but with this power comes responsibility. Join Frantz Ward Partner Megan Bennett and Associate Stacey Sanderson as they explore the intersection of AI and employment law. Discover how AI is changing workplace dynamics, the legal implications of its use in hiring, and the risks of bias and discrimination. From real-world cases to emerging regulations, they provide practical guidance for employers looking to integrate AI into their hiring processes while ensuring fairness and compliance.
Podcast First Aired: December 5, 2024
Transcript
Stacey Sanderson:
Hello everyone, and welcome back to another edition of Frantz Ward Podcast, Shoveling Smoke, where we explore the latest trends shaping the legal industry.
I’m Stacey Sanderson, an associate attorney in Frantz Ward’s Labor and Employment Practice Group, and I will be your host for today’s episode. Thank you everyone for tuning in.
In today’s episode, we’ll be exploring the impact of AI in the labor and employment space: its implications for employment screening and hiring processes, the use of AI in the workplace, and the potential for widespread discrimination if it’s not used appropriately.
Here with me today is Megan Bennett, a partner in Frantz Ward’s Labor and Employment Group. Megan, thank you so much for being here.
Megan Bennett:
Thanks, Stacey. I’m happy to be here and share information on this constantly evolving and very new topic.
Stacey Sanderson:
Megan, let’s jump right in. How is AI being used by employers right now?
Megan Bennett:
It seems like every week I’m getting an email about a new type of HR-related AI product, and that’s because AI can be a really great tool for employers. It can automate some of those repetitive or routine tasks that can be really time-consuming.
And so, because of that, it can really streamline things for HR professionals. The most prevalent way that I’m seeing employers use AI right now is for screening applicants for potential employment.
Stacey Sanderson:
So, what you’re seeing is that AI is not just about automating those basic tasks, but is actually making decisions about people’s careers?
Megan Bennett:
Yeah, so some of the AI tools I’m seeing for screening applicants include resume scanners that prioritize applications using certain keywords. There are also chatbots or virtual interviewers that ask job candidates about their qualifications, and then reject those candidates who don’t meet the employer’s predefined requirements.
There’s also video interviewing software that evaluates candidates based on their facial expressions or speech patterns.
Stacey Sanderson:
Let’s dig a little bit deeper into these programs. How exactly do they work?
Megan Bennett:
So, basically what happens is that data on what is considered to be a “good applicant” is loaded into the system. The program then compares the current applicant to that dataset of what the employer considers a “good applicant,” and it will accept, reject, or score the applicant based upon how they compare to it.
Now, as more applicants are accepted or rejected, the AI learns the employer’s preferences and it’ll start to change the dataset based upon those preferences. So, basically, it learns what you like, and it starts changing how it makes decisions based upon that.
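To make that loop concrete, here is a minimal, purely illustrative Python sketch of a keyword-based screener. Every keyword, threshold, and applicant in it is invented for the example; real screening products are far more sophisticated, but the feedback dynamic is the same.

```python
# Toy illustration of a resume screener with a feedback loop.
# All keywords, thresholds, and applicants are hypothetical.

GOOD_APPLICANT_KEYWORDS = {"python", "sql", "leadership"}  # seed profile
THRESHOLD = 2  # minimum keyword overlap needed to advance a candidate

def screen(resume_keywords: set) -> bool:
    """Advance the candidate if enough profile keywords match."""
    accepted = len(resume_keywords & GOOD_APPLICANT_KEYWORDS) >= THRESHOLD
    if accepted:
        # The feedback loop: traits of accepted candidates are folded
        # back into the profile, so future decisions drift toward past hires.
        GOOD_APPLICANT_KEYWORDS.update(resume_keywords)
    return accepted

applicants = [
    {"python", "sql", "golf"},        # advances; "golf" joins the profile
    {"golf", "leadership", "excel"},  # advances partly because of "golf"
]
for keywords in applicants:
    print(sorted(keywords), "->", "advance" if screen(keywords) else "reject")
```

Notice how “golf,” a trait incidental to the job, enters the profile once an accepted candidate happens to have it, and then influences the next decision. That drift is exactly the kind of unintentional bias Megan describes next.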
Stacey Sanderson:
While these sound like great tools for employers, what’s the catch?
Megan Bennett:
Yeah, so unfortunately, as great and as helpful as these technologies can be for employers, there is often a catch that comes with them.
Here, the problem is that if there are any unintentional biases in the data, the AI’s results may be impacted by those biases. This then leads to unintentionally discriminatory hiring decisions, which, as we all know as HR professionals and employment lawyers, lead to claims of disparate impact discrimination.
So, to give you an example, a couple of years ago, Amazon developed an automated hiring program which was trained to accept or reject applicants based upon resumes of accepted applicants over the previous 10 years. So, basically, people that Amazon had already hired.
The problem was that during those previous 10 years, more men than women had applied to jobs at Amazon, meaning that more men than women were hired for jobs at Amazon. The AI picked up on that, and because the dataset was skewed in that way, the program was favoring male applicants over female applicants.
This was obviously an unintentional result, but it did have a discriminatory impact, and this shows how those unintentional biases in the data can have really serious consequences.
Stacey Sanderson:
So, given the potential for this sort of disparate impact discrimination, the EEOC has recently issued guidance on the use of AI in making these types of employment decisions. Could you, Megan, walk us through what this guidance provides and how employers can rely on it to address these concerns?
Megan Bennett:
Yes. So, in short, the EEOC said, “Hey, remember how a couple decades ago we put out this guidance about disparate impact discrimination?” They said that guidance applies when you’re using AI for employment decisions as well.
So, just to give you a quick reminder of that guidance: basically, employers need to consider whether the AI program causes a selection rate for individuals in a protected group that is substantially less than the selection rate for individuals in a non-protected group.
Now, if that is happening, employers then need to ask whether using that program is job-related and consistent with business necessity.
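For the “substantially less” comparison, the EEOC’s longstanding Uniform Guidelines point to the four-fifths rule: a group whose selection rate is below 80 percent of the highest group’s rate is generally treated as showing evidence of adverse impact. Here is a minimal Python sketch of that check, using invented applicant counts; a real audit would of course need legal review.

```python
# Four-fifths (80%) rule check using hypothetical applicant counts.
groups = {
    "group_a": {"applied": 200, "selected": 60},  # 30% selection rate
    "group_b": {"applied": 150, "selected": 30},  # 20% selection rate
}

rates = {g: c["selected"] / c["applied"] for g, c in groups.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.0%}, ratio to highest={ratio:.2f} -> {flag}")
```

Here group_b’s rate is only two-thirds of group_a’s, so it trips the 80 percent threshold. If a tool flags like this, the analysis then moves to the second question: whether using the program is job-related and consistent with business necessity.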
Now, what I just went over, the disparate impact guidance, that’s nothing brand new. So, the biggest takeaway, the biggest new piece from the EEOC’s guidance, addresses employers saying, “Oh, that wasn’t us. The AI did it,” and pushing the blame off to the AI system.
That is not going to be a defense to the EEOC. Basically, the EEOC’s position is that AI is a great, helpful tool for employers, but the employer is ultimately responsible for its own employment decisions, even if the AI program was developed by a third-party vendor.
Stacey Sanderson:
Is there anything that employers can do to protect themselves from these risks and liability if they are using a third-party AI product?
Megan Bennett:
Yeah, so that’s a great question, because realistically, most employers are going to be using a vendor’s program here. So, here is what I suggest.
First, you need to ask the vendor what steps have been taken to evaluate their AI tool for biases. If the vendor hasn’t taken any steps, or bias doesn’t seem to be an issue on their radar, you need to find a new vendor. The stakes are too high to roll the dice on that.
Second, you need to find out what actions the vendor is currently taking, or will take in the future, to ensure the software continues to be bias-free. Because like I said, the AI learns from your preferences and the dataset changes over time. So, you need to constantly be checking it to make sure that what it has learned hasn’t changed how it makes decisions and led it to make biased ones.
Now, even after doing this really important due diligence on the vendor, if the tool still results in discrimination, the EEOC says the employer is still on the hook. You can’t pass the blame off to the vendor.
So, I suggest employers also do regular self-analyses of their AI products to make sure the AI hasn’t gone rogue and isn’t making decisions based on biases.
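One way to operationalize that self-analysis, sketched here with an invented decision log, is to recompute selection rates from the tool’s decisions each review period and flag any group that falls below the four-fifths benchmark:

```python
# Hypothetical quarterly self-audit: recompute selection rates per group
# from the tool's decision log and flag drift against the 80% benchmark.
from collections import defaultdict

decision_log = [  # invented records: (quarter, group, accepted?)
    ("2024-Q1", "group_a", True), ("2024-Q1", "group_b", False),
    ("2024-Q2", "group_a", True), ("2024-Q2", "group_b", True),
]

by_period = defaultdict(lambda: defaultdict(lambda: [0, 0]))
for quarter, group, accepted in decision_log:
    counts = by_period[quarter][group]
    counts[0] += 1              # applications seen
    counts[1] += int(accepted)  # applicants selected

for quarter, group_counts in sorted(by_period.items()):
    rates = {g: sel / apps for g, (apps, sel) in group_counts.items()}
    highest = max(rates.values())
    flagged = [g for g, r in rates.items() if highest and r / highest < 0.8]
    print(quarter, rates, "flagged:", flagged or "none")
```

Because the model’s behavior can drift as it learns, a check that passed last quarter is no guarantee for this one.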
Stacey Sanderson:
It seems like the EEOC is setting pretty firm expectations here and that this guidance is quite strict. And I think this leads nicely into my next question, which stems from the case of EEOC versus iTutorGroup.
This was a lawsuit filed in 2022 by the EEOC against a tutoring provider over alleged age discrimination in hiring. Megan, what happened in that case, and what are some of the key takeaways?
Megan Bennett:
Yeah, so this case actually resulted in the EEOC’s first AI settlement, and the settlement was entered back in August of 2023. Basically, what happened is that the AI used by iTutorGroup was automatically rejecting female applicants who were aged 55 and up, and automatically rejecting male applicants who were aged 60 and up.
Now, the way they figured out that this happened was that a rejected applicant resubmitted their application using a different birth date, and after they resubmitted, they were accepted for the position. So, that obviously raised some red flags.
As part of the settlement, iTutor Group agreed to pay $365,000, and they had to invite all rejected applicants to reapply and be considered based on their qualifications rather than their age. And again, this case really exemplifies the EEOC’s guidance. Employers are still liable even if the technology was ultimately to blame for the discrimination.
Stacey Sanderson:
Well, thank you for that. That is both fascinating and alarming. I think that a common theme running through our entire discussion today is really that the increased use of AI in employment decisions ultimately means that many employers are finding themselves face-to-face with these complicated compliance questions.
And speaking of compliance, let’s shift gears for just a moment to talk about some of the recent legislative updates concerning AI and employment law. Megan, what’s going on out there and what could the future of the legal landscape in this area look like?
Megan Bennett:
Well, right now, there are laws regarding AI in employment in effect in about five states. However, legislation has been introduced in several other states as well. Whether that legislation will pass in all of those states remains a question mark.
But if I’m getting out my crystal ball, I expect that laws about AI are going to become the new ban the box, the new paid sick leave, or the new marijuana testing laws, meaning that over the next few years we’re going to develop a patchwork of state laws with varying rules for employers on how they can and cannot use AI in making employment decisions.
Stacey Sanderson:
It sounds like this is something that employers and especially multi-state employers will want to keep a close eye out for.
Megan Bennett:
Definitely.
Stacey Sanderson:
So, we’ve talked a lot about how employers are using AI to make decisions, but what about the other side of the coin? How are employers using AI in the workplace aside from using it to make these employment decisions, and how are they monitoring or regulating their employees’ use of AI in the workplace?
Megan Bennett:
So, first, I think employers need to determine their stance on AI. Meaning, are you going to completely ban the use of AI by your employees? Are you going to allow your employees to use AI? And if you are going to allow it, are you going to limit how employees use it? I would suggest that you do limit it.
Now, regardless of your organization’s stance, the expectations should be communicated to your employees. If you’re going to allow AI, I suggest being very clear about what is and is not an acceptable use of AI by your employees.
Stacey Sanderson:
So, there really is a lot to consider here. Megan, while we’re talking about this, are there considerations that employers should keep in mind when they’re sort of grappling with all of this information, going through the process of creating their own policies concerning the use of AI in the workplace?
Megan Bennett:
Yes, definitely. So, there are a few things I suggest be included in a policy like this. First, I suggest that you require employees to seek approval from a manager prior to utilizing AI on a project.
Second, I suggest requiring extensive human review of anything created with AI. And third, you really should prohibit the input of work product, information provided by a client or third party, or any other type of confidential information into a public AI site.
For example, something like ChatGPT, because once you’ve done that, it’s lost all of its confidentiality, and it is now part of ChatGPT’s brain. It’s part of that dataset now, and it may show up in the outputs ChatGPT generates for others.
Now, because AI is so new, it’s going to be changing every single day. So, once you draft this policy, it really needs to be a living document, and you’re going to need to review and update it pretty regularly.
And I also think it’s important that creating this policy isn’t just an HR job. You need to get other stakeholders involved too. Getting IT and data teams involved, I think, is going to be really helpful in making sure this is a meaningful and impactful policy.
Stacey Sanderson:
Thank you for that insight. It’s clear that AI is changing how employees work. There are really no two ways about that. It’s also clear that there is a lot to think about here in terms of how employers can and should regulate that use to protect themselves from risk and liability if it’s not used appropriately.
So, Megan, we’ve covered a lot of information, and as we wrap up today’s episode, what are some key takeaways that you would like to leave with our listeners?
Megan Bennett:
I think the two main points I want to highlight here are, number one, to just repeat what the EEOC has said: human oversight is key. AI is not a replacement for HR and it’s not a replacement for following employment laws.
You really need to be mindful of the vendors you’re using for your AI tools, and you need to develop processes for periodic review of AI systems and tools.
[Music Playing]
And then the second key thing I want you to walk away with today, is that you really need to determine your organization’s stance on AI use by employees. You need to communicate that stance, and then you need to periodically review and update that stance because things are always changing.
Stacey Sanderson:
Thank you, Megan. It’s been an absolute pleasure to speak with you, and I really enjoyed our conversation today.
Megan Bennett:
Thanks for having me, Stacey.
Stacey Sanderson:
And that’s all for today. If you like our podcast, please share with your connections and hit subscribe to be the first to know of new episode releases.
Thanks again for listening, and we will catch you next time.
Shoveling Smoke is a production of Evergreen Podcasts. Our producer and audio engineer is Sean Rule-Hoffman. Thanks for listening.