Big Tech’s Culture of Retaliation Can Be Illegal

Dr. Timnit Gebru’s Firing is a Flash Point in the Wars Over Algorithmic Bias

On Wednesday, December 2, 2020, Timnit Gebru, the leader of Google’s Ethical Artificial Intelligence Team, announced in a series of tweets that she had been fired. Google executive Jeff Dean said in a statement on Friday, December 4, that Dr. Gebru had resigned. Dr. Gebru’s team has rallied behind her and started a widespread dialogue about implicit bias in technology and tech giants’ failure to stop it.

Dr. Gebru is a renowned researcher in the field of artificial intelligence ethics, a field she helped create by publishing a groundbreaking paper showing that facial recognition software was less accurate at identifying women and people of color, meaning the software would likely discriminate by “identifying” the wrong people, at potentially catastrophic cost to them. Facial recognition software is used by many government agencies, including police departments seeking to identify suspects, and by private retailers seeking to identify shoplifters.

Based on the work of Dr. Gebru and other researchers and civil rights activists, there is ample evidence that facial recognition software is biased against women and minorities. In 2015, Google, Dr. Gebru’s former employer, was forced to apologize publicly when its image-recognition software labeled Black people as “gorillas.” In 2018, as part of an ACLU study, Amazon’s Rekognition software falsely matched Representative Jimmy Gomez, a California Democrat and one of the few Hispanic lawmakers serving in the US House of Representatives, with a mugshot of someone who had been arrested. The same study found that nearly 40 percent of Rekognition’s false matches were of people of color.

Is Algorithmic Bias Illegal?

Needless to say, algorithmic bias is a new frontier in anti-discrimination law, and one that has the tech world on edge. Companies like Google and Amazon make money by selling these products to police, other government agencies, and retailers. They have a vested interest in keeping them on the market—and burying dissenting views.

The law has yet to fully adapt to the implications of algorithms making decisions that used to be made by people. Generally speaking, discrimination laws are violated by a showing of intentional discrimination or of disparate impact, meaning that a particular practice disproportionately impacts people in “protected classes” defined by characteristics such as race, gender, religion, or sexual orientation. A plaintiff in a disparate impact case does not need to show intent, but the defendant can then demonstrate that a legitimate, non-discriminatory reason informed the challenged practice.
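
To make the disparate impact concept concrete, the short sketch below works through the arithmetic of the EEOC’s “four-fifths” rule of thumb, one common preliminary screen for disparate impact; the hiring numbers are hypothetical.

    # Minimal sketch of a disparate impact check using the EEOC's
    # "four-fifths" (80%) rule of thumb. All hiring numbers are hypothetical.

    def selection_rate(selected: int, applicants: int) -> float:
        """Fraction of a group's applicants who were selected."""
        return selected / applicants

    # Hypothetical outcomes for two applicant groups.
    rate_a = selection_rate(selected=50, applicants=100)  # 0.50
    rate_b = selection_rate(selected=30, applicants=100)  # 0.30

    # The adverse impact ratio compares the disadvantaged group's
    # selection rate to the most favored group's rate.
    impact_ratio = rate_b / rate_a  # 0.60

    # Under the four-fifths rule, a ratio below 0.80 is treated as
    # preliminary evidence of disparate impact.
    if impact_ratio < 0.80:
        print(f"Ratio {impact_ratio:.2f} is below 0.80: possible disparate impact")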

In the context of algorithmic bias, tech companies would likely be able to articulate a number of non-discriminatory reasons for their programming choices. For instance, in the case of a program that analyzes resumes, the algorithm might review the resumes of actual employees, compare them to performance reviews, and then predict, based on an applicant’s years of relevant experience and college major, whether that applicant would succeed at the company. These are likely legitimate criteria for an employer to use in a hiring decision, even though they are correlated with protected classes: certain college majors are disproportionately filled by female students or by students of particular racial backgrounds.
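
For illustration only, a resume-screening model of the kind described above might look something like the following sketch. The training data, features, and model choice are our own assumptions, not any company’s actual system.

    # Hypothetical sketch of the resume-screening model described above.
    # The data, feature choices, and model are invented for illustration;
    # this is not any company's actual system.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import OneHotEncoder

    # Invented training data: (years of experience, college major),
    # labeled with whether the employee performed well.
    years = np.array([[5.0], [4.0], [2.0], [6.0], [3.0], [1.0]])
    majors = np.array([["computer science"], ["nursing"], ["computer science"],
                       ["education"], ["engineering"], ["nursing"]])
    performed_well = np.array([1, 0, 1, 0, 1, 0])

    # One-hot encode the major. Facially neutral, but a major can act as
    # a proxy for gender or race when enrollment is demographically skewed.
    encoder = OneHotEncoder()
    X = np.hstack([years, encoder.fit_transform(majors).toarray()])

    model = LogisticRegression().fit(X, performed_well)

    # Score a new applicant: the criteria look legitimate, yet the model
    # may have absorbed demographic patterns baked into the majors.
    applicant = np.hstack([[[4.0]], encoder.transform([["nursing"]]).toarray()])
    print(model.predict_proba(applicant))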

In 2018, Amazon scrapped a recruiting algorithm that, in the words of a press article, tried to “automatically return the best candidates out of a pool of applicant resumes.” Amazon found that the program “would down-rank resumes when it included the word ‘women’s,’ and even two women’s colleges. It would also give preference to resumes that contained what Reuters called ‘masculine language,’ or strong verbs like ‘executed’ or ‘captured.’” The program rejected female applicants’ resumes at disproportionate rates based on these biased assumptions.
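
One way researchers probe for this kind of bias is a counterfactual test: change only a gendered token in a resume and see whether the score moves. The sketch below uses a toy scoring function that mimics the behavior reported about Amazon’s tool; it is an illustration, not the actual model.

    # Rough sketch of a counterfactual token test. The scoring function is
    # a toy stand-in mimicking the reported behavior, not Amazon's model.

    def score_resume(text: str) -> float:
        """Toy scorer that penalizes a gendered token and rewards
        so-called 'masculine' verbs, as the press reports described."""
        text = text.lower()
        score = 1.0
        if "women's" in text:
            score -= 0.3  # learned penalty on a gendered token
        for verb in ("executed", "captured"):
            if verb in text:
                score += 0.1  # learned bonus for a "masculine" verb
        return score

    base = "Led chess club; executed product launches."
    variant = "Led women's chess club; executed product launches."

    # A materially lower score for the counterfactual resume is a red flag
    # for gender bias in the underlying model.
    print(score_resume(base), score_resume(variant))  # 1.1 0.8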

Is this intentional discrimination? What if an Amazon employee alerted the company to the algorithm’s bias, and Amazon continued to use it—instead of scrapping it amidst public outcry? One could certainly argue that Amazon would be recklessly condoning gender discrimination if it continued to use the algorithm it knew to be biased. This is especially true in states like California, where failure to prevent discrimination is a violation of state employment laws. 

Big Tech’s Culture of Retaliation

Dr. Gebru’s termination is the latest in a string of firings at Google and other tech companies targeting employees who speak out on diversity, labor, or ethics issues. In many cases these firings are illegal, even if the employee who is fired is not a member of a protected class.

In addition to prohibiting discrimination on the basis of protected characteristics such as race, gender, or disability, both California and Federal law protect against so-called “associational discrimination”: discrimination against workers who are not members of a particular protected category but who have a relationship with protected persons. These laws protect not only employees who are discriminated against because of friendships or relationships with members of protected classes, but also employees who are discriminated against for advocating for others who fall within a protected class.

For example, in Johnson v. Univ. of Cincinnati, a federal appellate court held that the vice president of a university could bring claims for discrimination where he claimed to have been fired because of his advocacy on behalf of minority students. 215 F.3d 561, 566 (6th Cir. 2000). California law provides even greater protection for employees.

California and Federal law also provide protections for employees who are terminated because they complained about, or refused to participate in, conduct that they believe violates the law. In Dr. Gebru’s case, she could have reasonably believed that Google’s AI products violated civil rights statutes, privacy laws, or any number of other legal mandates. If she did, her complaints would be “protected,” and retaliating against her for making them would be illegal.

Experienced Discrimination Law Attorneys

With our background in complex commercial litigation against giant companies like Facebook, Amazon, and Google, and our commitment to employee rights litigation, we enthusiastically represent plaintiffs who believe they have been wronged by algorithms or who have been retaliated against for speaking up against implicit or algorithmic bias.

If you believe that you have been discriminated or retaliated against because you are a member of a protected class, because of your association with someone in a protected class, or because you spoke up about illegal conduct, our experienced, award-winning employment attorneys can help.

Julian Burns King graduated with honors from Harvard Law School and founded King & Siegel in 2018. As head of the Firm’s discrimination and harassment practice areas, she champions the rights of working parents and victims of workplace discrimination and harassment. She has been recognized as a “Rising Star” by Super Lawyers annually since 2018 and has recovered tens of millions of dollars on behalf of her clients.
