Aon | Financial Services Group
Risks of Artificial Intelligence – A Spotlight on Employment Practices Liability
Release Date: August 2023

How will Artificial Intelligence (AI) create risk in the context of employment contracts?
AI is defined by the United States Congress as a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”
In the employment context, AI usage could take the form of resume scanners prioritizing applications by using keywords, software that monitors employees, virtual assistants interviewing applicants about experience or qualifications, video interviewing software that evaluates candidates based on expressions or speech and testing software that scores applicants, to name a few.
Anticipating AI’s expansion into the employment space, the Biden Administration issued voluntary guidance in its Blueprint for AI Bill of Rights for private employers as well as two Executive Orders that ask federal agencies to guard against algorithmic discrimination in the use of AI in the government’s own hiring practices.
According to the Uniform Guidelines on Employee Selection Procedures under Title VII, adopted by the Equal Employment Opportunity Commission, discrimination in the use of AI could be signaled by a selection rate for individuals in one protected group that is substantially less than for another. It suggests the application of its four-fifths statistical rule to assess any disparate impact in a selection process.
This means that if the selection rate for one group is less than four-fifths (80%) of the rate for the group with the highest selection rate, the disparity may indicate adverse impact or discrimination. Note that employers can also be responsible for violations caused by tools designed or used by a vendor if the company uses that tool as part of its hiring process.
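As a minimal sketch of how the four-fifths calculation works, consider the following example. The group names and applicant counts are invented for illustration only; they are not from the EEOC guidelines or any actual selection data.

```python
# Hypothetical illustration of the EEOC four-fifths (80%) rule.
# Group names and counts are invented for demonstration purposes.

def selection_rate(hired: int, applied: int) -> float:
    """Fraction of applicants from a group who were selected."""
    return hired / applied

def four_fifths_check(rates: dict) -> dict:
    """Flag groups whose selection rate falls below 80% of the
    highest group's selection rate, signaling possible adverse impact."""
    highest = max(rates.values())
    return {group: rate / highest < 0.8 for group, rate in rates.items()}

rates = {
    "group_a": selection_rate(hired=48, applied=80),  # 0.60
    "group_b": selection_rate(hired=12, applied=40),  # 0.30
}

# group_b's rate (0.30) is only 50% of group_a's (0.60),
# so it falls below the four-fifths threshold and is flagged.
print(four_fifths_check(rates))  # {'group_a': False, 'group_b': True}
```

Under this rule, being flagged does not by itself establish discrimination; it is a statistical signal that the selection process may warrant closer review.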
Even with the above-referenced federal guidance, there are no current mandatory federal requirements governing the use of algorithmic tools in the employment process. In contrast, states and local governments are implementing their own initiatives requiring varying levels of transparency, impact studies, and regulation around the use of algorithmic tools. For example, California requires explicitly informing affected persons when algorithms are being used for important employment decisions. Pennsylvania has a bill pending that would require companies to register their algorithmic systems so they can be evaluated and regulated.
Criticism of early proposed and enacted legislation includes concern over the lack of a worker’s private right to action in certain jurisdictions. The District of Columbia, however, is paving the way by proposing legislation with a private right of action under its bespoke “Stop Discrimination by Algorithms Act of 2023” with the recoverability of punitive damages and attorney’s fees in conjunction with civil liability for violators of up to $10,000 per violation. Other states with enacted and pending AI laws include, but are not limited to: Colorado, Connecticut, Hawaii, Illinois, Indiana, Maine, Massachusetts, Maryland, Minnesota, Montana, New Hampshire, New Jersey, New York, Oregon, Rhode Island, South Carolina, Tennessee, Texas, Vermont, Virginia, Washington, and West Virginia.
To date, we have not seen a notable increase in suits alleging violations of employees' rights through the use of AI in the hiring, retention, and advancement processes. Should these types of actions arise, employment practices liability (EPL) policies could offer liability protection, presuming no computer-related or AI exclusions have been added to the policies. The availability of coverage would depend on how the AI claims are alleged, because EPL policies require a claim alleging a violation of a defined peril to access their limits. Possible triggers could include allegations of technological processes resulting in discrimination; an adverse employment decision, including the failure to hire or promote; or invasion of an employee's privacy. EPL policies' broad, open-ended definitions should be sufficient, but as AI usage becomes more common, adding express coverage for employment-related violations resulting from the use of technological systems may eliminate uncertainty.
As companies look to encourage automation and leverage technology, they should engage employment counsel to assess these tools and ensure compliance with state and local regulations. They should also seek guidance on the exposures that technologies used by third-party vendors may represent to the company, when the tools provided by these vendors are used in areas like hiring, retention, and advancement. Contact your Aon representative for more details on the impact these new exposures might have on your purchase of insurance, particularly regarding employment practices liability.
Read Aon’s AI fact sheet here.
If you have questions about your coverage or are interested in obtaining coverage, please contact your Aon broker.
Contact
Discuss this article with Financial Services Group professionals Samantha Manfredini Look or Thomas Hams.
Samantha Manfredini Look
Vice President, Employment Practices Liability Insurance
Chicago
Thomas Hams
Managing Director and Employment Practices Liability Leader
Chicago