Discrimination Claims Based on Algorithms

Computer and artificial intelligence algorithms are widely used to reduce human workloads and produce fast, reliable results. They are programmed to perform tasks based on past data and predictable patterns. They are treated as objective, impartial decision-makers that filter the most relevant sources for your Google searches, reorder the content of your social media feeds, and determine your credit score, among other things.

But algorithms are programmed by humans. They rely on past data to make future predictions, which means they are rarely free from bias and can reinforce past stereotypes and patterns. Thus, algorithms can have negative effects on employment opportunities and can discriminate, even if they appear “neutral.”

The Ways Algorithmic Bias Can Affect Employment

Hiring

Algorithms discriminate in hiring in various ways.

First, algorithms sometimes discriminate in showing job advertisements to prospective candidates. For example, a pilot job may be advertised to men more than women, or a tech job posting may be shown to younger workers more often than older workers. This affects disadvantaged groups’ ability to find and apply for jobs they may want.

A recent study found that Facebook’s advertising algorithm likely discriminates based on gender. In the study, researchers submitted an ad for a mechanic position. Facebook’s advertising algorithm showed it to men 96% of the time. When they submitted an ad for a preschool nurse, the algorithm showed it to women 95% of the time. Is this algorithm gender-neutral? Of course not.

Second, algorithms discriminate when employers use them to screen candidates. Some employers compare candidates with existing workforce data to determine “culture fit” and make predictions about job performance. These algorithms may discriminate based on disparities in the existing workforce.
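To make this concrete, below is a simplified, hypothetical sketch of a "culture fit" screen. The profiles, attributes, and scoring rule are all invented for illustration, and real screening products are far more complex, but the failure mode is the same: candidates who resemble the current workforce score highest, and so do their demographics.

```python
# Hypothetical "culture fit" screen: score candidates by how closely
# they resemble the current workforce. All profiles are invented.

CURRENT_WORKFORCE = [
    {"school": "Stanford", "hobby": "golf"},
    {"school": "Stanford", "hobby": "golf"},
    {"school": "MIT", "hobby": "chess"},
]

def fit_score(candidate: dict) -> int:
    """Count attribute matches between a candidate and each current employee."""
    return sum(
        candidate[key] == employee.get(key)
        for employee in CURRENT_WORKFORCE
        for key in candidate
    )

# Whoever looks like the incumbents wins, regardless of qualifications.
print(fit_score({"school": "Stanford", "hobby": "golf"}))   # 4
print(fit_score({"school": "Howard", "hobby": "tennis"}))   # 0
```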

Another way algorithms discriminate is through automated resume-screening tools. These tools rank terms and assign each resume a score. The terms the employer chooses, or the sample resumes used to train the AI, shape that scoring, which means the algorithm mirrors biases already present in the workforce. For instance, if a company has never promoted women into a particular role, the sample resumes it chooses will teach the algorithm to prioritize the experiences and terms that appear on men's resumes.
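Resume scoring can inherit bias the same way. Below is a minimal sketch, with invented phrases and resumes, of how a tool that learns term weights from past hires' resumes will down-score candidates whose experiences differ from those hires, even when they are equally qualified.

```python
# Hypothetical keyword-based resume scoring. The sample resumes and
# phrases are invented; real tools are far more sophisticated.

from collections import Counter

# Sample resumes drawn from past hires. If past hires skewed male,
# male-associated phrases dominate the learned weights.
sample_resumes = [
    "captain of lacrosse team, led engineering club",
    "led engineering club, eagle scout",
]

# Weight each phrase by how often it appears in the "good" samples.
term_weights = Counter()
for resume in sample_resumes:
    term_weights.update(resume.split(", "))

def score(resume: str) -> int:
    """Sum the learned weights of the phrases found on a resume."""
    return sum(term_weights[phrase] for phrase in resume.split(", "))

# A candidate whose experience differs from past hires scores zero.
print(score("led engineering club, eagle scout"))           # 3
print(score("captain of field hockey team, tutored math"))  # 0
```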

Similarly, an algorithm that ranks graduates of top-ranked universities above graduates of other schools can disadvantage women and minority candidates who attended historically minority-serving or women’s colleges or who come from lower-income backgrounds.

Setting Pay & Salary

Many employers also use AI to set salaries and wages. Algorithms that set wages look at data about employees’ skills, value, and performance to determine appropriate compensation.

Of course, the problem is that these algorithms can be biased. If an employer feeds the algorithm selective or biased data (for example, data showing that one employee is paid less than another, equally qualified employee), those biases carry over into its suggestions for future pay.

While algorithms can be useful in reducing the gender and race wage gaps, they also create inequality if the employer is not careful. For instance, we brought a case against a major company that based salaries on past wages, leading to inequality between female employees and male employees. (Paying wages based on past wages is known to increase the gender wage gap.)
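The anchoring effect is easy to see in a minimal sketch. The formula and figures below are invented for illustration: if a pay algorithm blends a candidate’s prior salary with the market rate, a past pay gap flows straight into the new offer.

```python
# Hypothetical pay-setting rule that anchors offers to prior salary.
# The weights and figures are invented for illustration.

def suggest_salary(prior_salary: float, market_midpoint: float) -> float:
    """Blend prior pay with the market rate (a common anchoring pattern)."""
    return 0.7 * prior_salary + 0.3 * market_midpoint

market = 100_000
# Two equally qualified candidates; one was underpaid in a past job.
print(suggest_salary(95_000, market))  # 96500.0
print(suggest_salary(80_000, market))  # 86000.0 -- the old gap persists
```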

Similarly, algorithms may recommend higher pay based on educational benchmarks or other “objective” metrics that can serve as proxies for race- or gender-based disparities.

Promotions

The same types of problems arise when algorithms make promotion decisions. Another problem for promotion (and hiring) decisions, however, is that algorithms are often trusted to be neutral and objective when they are not. This can lead people to replace their own intuition and judgment with the computer’s recommendation.

For example, an employer might promote a young worker the algorithm rated as a 99% match over an older worker known to be extremely hard-working. The computer may recommend the young worker because of past, prejudiced data suggesting, for example, that younger workers tend to be more adaptable than older workers.

Layoffs

Using algorithms to predict employee longevity or absenteeism, and then using those predictions to make layoff decisions, can also be discriminatory. For example, an algorithm may learn from past data that women in their late 20s and early 30s are more likely to take leaves of absence (because they are pregnant at higher rates than other workers) and flag them for layoffs on that basis. Other times, employees with disabilities are penalized for past absences and for predictions about future absences.
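As a simplified, hypothetical illustration, the sketch below shows how a “longevity” score that penalizes predicted leave can push employees who are likely to take protected leave to the top of a layoff list. The features and weights are invented, not drawn from any real product.

```python
# Hypothetical "longevity" score used to rank employees for layoff.
# Features and weights are invented; the point is that predicted leave
# (often pregnancy- or disability-related) drags the score down.

def longevity_score(predicted_leave_days: int, tenure_years: int) -> int:
    """Higher scores are 'safer' from layoff under this invented rule."""
    return tenure_years * 10 - predicted_leave_days * 2

# Two employees with identical tenure; one is predicted (from group
# patterns) to take protected pregnancy or medical leave.
print(longevity_score(predicted_leave_days=60, tenure_years=5))  # -70
print(longevity_score(predicted_leave_days=5, tenure_years=5))   # 40
```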

Employers increasingly look at this type of data when making layoff decisions. Our pregnancy and disability discrimination attorneys have had two large settlements involving just this situation. In one, a pregnant employee was given a low “longevity” score and chosen for layoff on that basis. In another, a disabled employee was given a score predicting significant future absences and was also selected for layoff.

In both situations, the prediction that the employee would miss work cannot be a legal basis for firing them. These absences—if they end up happening at all—are “job-protected” and the employer cannot discriminate based on them.

Conclusion

Algorithms are only as good as the data they analyze. In some situations, existing law protects workers and provides a remedy if they are discriminated against. In other situations, the law has not caught up with technological advances.

Talk to an Experienced Employment Attorney Today

At King & Siegel LLP, we have helped hundreds of workers stand up to their employers. If you believe you have been adversely affected by algorithm bias at your job, we are here to help.

Need legal help? We provide free, confidential consultations to California workers. You should contact us as soon as possible to make sure your claim is still within the time limits set by law. Contact us today through our website or give us a call at (213) 465-4802 to schedule a free consultation.


Julian Burns King graduated with honors from Harvard Law School and founded King & Siegel in 2018. As head of the Firm’s discrimination and harassment practice areas, she champions the rights of working parents and victims of workplace discrimination and harassment. She has been recognized as a “Rising Star” by Super Lawyers annually since 2018 and has recovered tens of millions of dollars on behalf of her clients.
