From initial job postings to final hiring decisions, AI-driven tools are reshaping how companies evaluate and select employees. While these technologies promise unprecedented efficiency, they also introduce a complex web of legal and ethical challenges, chief among them the risk of algorithmic bias and discrimination. The question is not whether AI can discriminate, but how to prevent it from doing so.
The legal theory of “disparate impact” is particularly relevant to AI hiring systems. Under this doctrine, employers can be held liable for discrimination even if they did not intend to discriminate, provided their practices have a disproportionate negative effect on protected groups. This means that an AI system that consistently screens out candidates of a particular age, gender, race, or ethnicity could violate federal law, regardless of whether the algorithm explicitly considers those factors. For example, AI may identify and rely on seemingly neutral data points that are, in fact, proxies for protected characteristics: a candidate’s zip code can serve as a proxy for race or national origin, and gaps in employment history, which often correlate with gender, may be unfairly penalized. Because AI models “learn” from the data they are trained on, a model trained on data that reflects historical or societal biases can absorb and perpetuate those same biases.
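To make the proxy problem concrete, the sketch below checks how strongly a facially neutral feature (here, zip code) tracks a protected characteristic in applicant data. The data and column names are hypothetical, and this is only a minimal screening heuristic, not a substitute for a formal validation or legal analysis.

```python
# Minimal sketch: flag facially neutral features that strongly track a
# protected characteristic (hypothetical data and column names).
import pandas as pd
from scipy.stats import chi2_contingency

applicants = pd.DataFrame({
    "zip_code": ["10001", "10001", "60601", "60601", "60601", "73301"],
    "race":     ["white", "white", "black", "black", "hispanic", "white"],
})

# Test the association between a candidate feature and a protected attribute.
# A large, statistically significant association suggests the feature may be
# acting as a proxy for the protected characteristic.
table = pd.crosstab(applicants["zip_code"], applicants["race"])
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
```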
Employers are responsible for the outcomes of their hiring practices, regardless of whether those decisions are made by a human or an algorithm. The EEOC has issued guidance emphasizing that AI selection tools must not result in “disparate impact,” which occurs when a neutral policy or practice disproportionately disadvantages a protected group. The EEOC treats algorithmic tools as “selection procedures” under Title VII, expects ongoing adverse impact monitoring, and warns that employers can remain liable even when a vendor designs or administers the tool. The EEOC’s position is clear: an employer cannot evade liability by outsourcing its selection procedures.
A growing number of jurisdictions are enacting specific regulations for AI in employment. New York City’s law, for example, requires employers to conduct an independent bias audit of any automated employment decision tool. States such as Illinois and Colorado have also passed laws requiring employers to prevent algorithmic discrimination and to notify applicants when AI is used. Recent court decisions also suggest that AI vendors can be held liable as “agents” of employers, a potentially fundamental shift in liability. In Mobley v. Workday, a federal court allowed a collective action to proceed on allegations that a software company’s AI screening tools caused a disparate impact against applicants aged 40 and older, and recognized potential vendor liability under an employer “agent” theory.
Strategies for Employers
Given the significant legal risks, employers must be proactive in their use of AI. Here are critical steps to take:
- Conduct Disparate Impact/Bias Audits: Before deploying any AI tool, and on an ongoing basis thereafter, audit its outcomes to ensure it is not disproportionately screening out protected groups. If a disparate impact is found, the tool must either be revised or validated as job-related and consistent with business necessity. (A minimal sketch of this selection-rate calculation appears after this list.)
- Maintain Human Oversight: AI should be a tool to augment, not replace, human decision-making. Ensure that a human is “in the loop” and can review and override any automated decisions.
- Vet Your Vendors: Employers cannot escape liability by outsourcing. Before purchasing an AI tool, demand transparency from the vendor regarding their bias testing, data sources, and the algorithm’s decision-making process. Include contractual protections that hold vendors accountable for discriminatory outcomes.
- Provide Reasonable Accommodations: Ensure AI tools are accessible and offer alternative assessment methods for candidates with disabilities, as required by the ADA.
- Be Transparent with Candidates: Inform applicants and employees when AI is being used in the hiring process and explain how it will affect them.
- Develop Clear Policies: Establish clear internal policies on the ethical and legal use of AI in hiring and provide training to HR staff and managers on how to use these tools responsibly and identify potential bias.
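As a concrete illustration of the audit step above, the sketch below computes selection rates by group and the impact ratio against the most-selected group, following the EEOC’s long-standing “four-fifths” rule of thumb (a ratio below 0.8 is a common red flag). The data and column names are hypothetical; a real audit would also involve statistical significance testing, larger samples, and legal review.

```python
# Minimal adverse-impact audit sketch (hypothetical data and column names).
# Computes selection rates by group and the impact ratio relative to the
# highest-selected group, per the EEOC "four-fifths" rule of thumb.
import pandas as pd

results = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "selected": [1,   1,   1,   0,   1,   0,   0,   0,   1],
})

selection_rates = results.groupby("group")["selected"].mean()
reference_rate = selection_rates.max()        # rate of the most-selected group
impact_ratios = selection_rates / reference_rate

for group, ratio in impact_ratios.items():
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"group {group}: selection rate {selection_rates[group]:.2f}, "
          f"impact ratio {ratio:.2f} -> {flag}")
```

A flagged ratio does not by itself prove discrimination, but it signals that the tool should be revised or formally validated as job-related and consistent with business necessity, as noted above.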
The integration of AI into hiring processes represents a pivotal moment in employment law and practice. Technology intended to eliminate human bias can instead create new forms of systemic discrimination. Courts continue to apply traditional anti-discrimination principles to AI-powered hiring decisions, and the legal trajectory clearly favors expanded liability rather than reduced oversight. For employers, the path forward requires a fundamental shift in thinking about AI hiring tools: rather than viewing them as neutral technological solutions, employers must treat them as powerful systems that require active management.