Artificial Intelligence is Only Human: Protect Your HR Team From Potential Risks
For years, we have become accustomed to the idea of Artificial Intelligence (AI) in the workplace. Computers have enhanced and modernized the way organizations operate, doing jobs historically performed by humans.
More recently, AI has become prevalent in the hiring process for thousands of companies. Instead of people sorting through stacks of resumes to find their ideal candidate, that function can now be handled by AI in a fraction of the time a human reviewer would need.
Having AI do the tedious work sounds like a dream, but its faults could lead to a nightmare.
AI can raise concerns about discrimination and bias in hiring. The technology allows an employer to narrow the search for their ideal candidate by focusing on certain key indicators that previous successful candidates have possessed. That is, programmers can set up algorithms to find their most suitable candidates by including specific search details.
The problem is that you may inadvertently exclude candidates from the talent pool because information in their resumes does not fit the defined search criteria. For example, if your algorithm is built from a pool of past successful candidates, it may limit searches to individuals of a specific gender, age, national origin or race.
That’s where it gets risky.
By automatically removing qualified candidates from the selection pool, an employer may be exposed to allegations of discrimination and may be in violation of Title VII of the Civil Rights Act of 1964.
Title VII prohibits employment discrimination based on race, color, religion, sex or national origin. That includes facially neutral selection procedures that disproportionately exclude members of a protected group, even when no discrimination is intended. This form of discrimination is known as "disparate impact."
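One common screening test for disparate impact is the EEOC's "four-fifths rule" from the Uniform Guidelines on Employee Selection Procedures: a group's selection rate below 80% of the highest group's rate is generally treated as evidence of adverse impact. The sketch below illustrates that calculation; the applicant numbers and group names are hypothetical, for illustration only.

```python
# Minimal sketch of the EEOC "four-fifths rule" check for adverse impact.
# All applicant data below is hypothetical.

def selection_rate(selected, applicants):
    """Fraction of a group's applicants that the screening tool selected."""
    return selected / applicants

def four_fifths_check(rates):
    """Compare each group's selection rate to the highest group's rate.

    Under the EEOC's Uniform Guidelines, a rate below 80% of the highest
    group's rate is generally treated as evidence of adverse impact.
    Returns True for groups that pass the check, False for those flagged.
    """
    highest = max(rates.values())
    return {group: rate / highest >= 0.8 for group, rate in rates.items()}

# Hypothetical screening results by demographic group
rates = {
    "group_a": selection_rate(48, 100),  # 48% selected
    "group_b": selection_rate(30, 100),  # 30% selected
}

print(four_fifths_check(rates))
# group_b's ratio is 0.30 / 0.48 ≈ 0.625, below 0.8, so it is flagged
```

A failed four-fifths check is a screening signal, not a legal conclusion, but it is exactly the kind of review an employer using AI hiring tools should be running regularly.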
The Equal Employment Opportunity Commission (EEOC) is actively monitoring AI trends and has established guidelines to ensure that the use of AI software and other emerging technologies in hiring and other employment decisions complies with the federal civil rights laws the EEOC enforces.
There are ways to protect your company. Among them is following the EEOC's guidance on how algorithms are developed and on the use of AI in employment decisions. The EEOC also identifies promising practices and holds listening sessions with key stakeholders about algorithmic tools in employment decisions.
Ultimately, if your organization uses AI in its hiring practices or for other employment-related decisions, understand the potential exposure and ensure you have addressed algorithmic fairness in using these tools.
Organizations should also have the proper insurance coverage to protect them against allegations of discrimination or bias in the hiring process. An employment practices liability policy is designed to cover an organization for various employment-related allegations, including discriminatory practices.
An annual review of your employment practices liability policy should be conducted with your broker to address exposures for your organization.