Artificial intelligence technology plays a major role in your life, even if you’re not aware of it. For example, perhaps you’ve applied to a job recently. It’s possible the company with which you’re seeking employment is one of many that have begun using AI tools to sort through applications.
Reasons for doing so include saving time and, theoretically, more easily finding ideal candidates to fill roles. An AI can be trained to identify various “green flags” in an application indicating a job-seeker may be a good fit for a position. Thus, an AI can speed up the process of choosing the right candidate from a potentially long list of applicants.
However, while some assume that AI will also guard against human bias in recruiting and talent acquisition, there is reason to believe that, if left unchecked, AI could actually be prone to accidental bias. The following examples demonstrate how:
Training AI With Poor Data
Different AI hiring tools offer different features, and no two work in exactly the same way.
Generally, though, these programs work by analyzing data regarding the shared characteristics of workers who’ve succeeded in a given industry or company. This data can teach an AI to look for applicants who also possess the qualities that might set them up for success.
However, this method may fail to account for how past biases may have influenced the success of workers in a particular field or organization. For example, if an employer has historically been biased in favor of male workers, they might have been more inclined to award promotions to men.
An AI reviewing data from that company might erroneously conclude that men are naturally better suited to certain positions. Thus, it may develop a bias against candidates who aren’t male.
It’s essential to understand how easily this can happen. Even training an AI with a data set featuring more applications from men than from women (or vice versa) could lead to a gender bias.
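To make this concrete, here is a deliberately simplified sketch of how skewed historical data can produce a skewed screening model. Every record, field name, and function here is invented for illustration; real hiring tools are far more complex, but the underlying dynamic is the same: a model that learns from biased outcomes reproduces the bias.

```python
# Hypothetical example only. "Historical" hires where past bias meant
# men were disproportionately marked as successful.
past_hires = [
    {"gender": "M", "successful": True},
    {"gender": "M", "successful": True},
    {"gender": "M", "successful": True},
    {"gender": "F", "successful": False},  # passed over despite merit
]

def success_rate(gender):
    """Fraction of past hires of this gender marked successful."""
    group = [h for h in past_hires if h["gender"] == gender]
    return sum(h["successful"] for h in group) / len(group)

def naive_score(candidate):
    """A naive 'model' that scores candidates by how often people
    'like them' succeeded in the past -- here, by gender alone."""
    return success_rate(candidate["gender"])

# Two otherwise identical candidates receive very different scores
# purely because of the gender field in the historical data.
print(naive_score({"gender": "M"}))  # 1.0
print(naive_score({"gender": "F"}))  # 0.0
```

The model never "decides" to discriminate; it simply mirrors the pattern it was given.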
Programming Bias Into an Algorithm
Those who design AI algorithms must also be careful. Even when their intentions are good, they might program bias directly into an algorithm without realizing it.
Perhaps someone designing the algorithm for an AI hiring tool programs it so that the tool looks for job applicants with certain educational backgrounds. This could cause the AI to dismiss otherwise qualified candidates who might not have had the same educational opportunities as others due to factors beyond their control.
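A hard-coded rule like that can be as simple as an allow-list. The sketch below is hypothetical (the school names and candidate fields are invented), but it shows how a single screening rule can silently drop a qualified applicant:

```python
# Hypothetical example only: a hard-coded educational allow-list.
PREFERRED_SCHOOLS = {"University A", "University B"}

def passes_screen(candidate):
    """Rejects anyone outside the allow-list, regardless of skill."""
    return candidate["school"] in PREFERRED_SCHOOLS

applicants = [
    {"name": "Lee", "school": "University A", "years_experience": 2},
    {"name": "Kim", "school": "Community College C", "years_experience": 10},
]

# The rule shortlists by school alone, so the far more experienced
# candidate never reaches a human reviewer.
shortlist = [a["name"] for a in applicants if passes_screen(a)]
print(shortlist)  # ['Lee']
```

No one intended to exclude strong candidates; the exclusion is a side effect of one seemingly reasonable line of criteria.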
It’s also worth noting that, because many of our biases are unconscious, an AI developer could include bias in an algorithm without knowing it. AI may not be human, but it was designed by humans. It may thus still be vulnerable to certain human weaknesses.
How a Workplace Discrimination Lawyer Can Help
Although many hope that AI will serve to guard against discrimination in hiring, it’s possible it could cause problems that have yet to be anticipated. Lawmakers in California and throughout the country should prepare accordingly.
In the meantime, if you think you’ve been the victim of discrimination because a potential employer denied you a position for which you were qualified, you may be eligible for compensation, regardless of whether an AI or a human made the ultimate hiring decision.
Learn more about your legal options in these circumstances by discussing your case with an attorney. A Los Angeles workplace discrimination attorney at Rager & Yoon — Employment Lawyers will gladly answer any questions you may have. Get started today by contacting us online or calling us at 310-527-6994.