New Directions | A Convenient Excuse: Tech's Discrimination Problem

Electronic communications have changed dramatically since 1993, when H-Net's first email discussion groups (or "lists") began to help historians connect and share resources online. Since then, not only academic life but life in general has become mediated in countless ways through screens and algorithms that have changed how people work, learn, shop, make friends, and even remember their personal and collective pasts. While many of these changes have greatly facilitated research and communication, lawmakers and the public are starting to realize the harm that pervasive technology can do in our lives. Doug Priest, Digital Managing Editor at Townsquare Media, who holds a PhD in history from Michigan State University, reviews three recent titles by scholars who have pointed to the ways that discrimination on the basis of race, gender, and class persists in tech. Readers will find a list of additional readings at the end of the essay. We thank former Book Channel editor Adrienne Tyrey for her work in commissioning and helping to edit this essay. --Yelena Kalinsky, Book Channel Editor


On April 9 and 10, 2018, Facebook founder and CEO Mark Zuckerberg sat down before members of Congress to answer questions about the company’s data collection and sharing practices. The hearing was occasioned by the revelation that Cambridge Analytica gained access to data on some 87 million Facebook users. As I sat watching, I realized that, aside from exposing the embarrassing lack of tech knowledge on the part of our congresspeople, the proceedings revealed that our public discourse lacks either the ability or the will to grapple with the relationship between decisions and outcomes when it comes to the technologies that we use to gather, find, and share information.

Even as Zuckerberg dodged complicated questions with promises that his team would follow up with the panelists, it became apparent that decisions had been made that led to the data being shared. The passive voice serves its purpose here. Responsibility was somehow distant from the discussion even as it seemed, paradoxically, plainly obvious to everyone observing that Facebook, with Zuckerberg at the helm, was at fault. Fault and responsibility seem muddy in the tech world because the decisions made by both the humans designing our technology and the algorithms governing its behavior in real time are obscured from the average user’s experience. As users, we lack the ability to assign fault even as we articulate problems with our technology with ever-increasing clarity. Those problems are not limited to data privacy; they extend to social problems created and reinforced by tech, and the causes of those problems have been similarly obscured.

Over the past several years, scholars have taken the topic of technology and discrimination more and more seriously. Although it is not a new topic, a critical mass of available data, scholarly will, and social momentum has made the literature timely, salient, and increasingly plentiful. That’s a good thing, because in this moment we need not merely critique but a deep understanding of both the human and algorithmic reasons our tech behaves as it does. Without contending with both, we are doomed to the same kind of ineffectual questioning on offer from Congress.

Marie Hicks (who uses the pronouns they/them) has taken it upon themself to challenge the notion that computer science has always been a male-dominated field and to explain the historical decisions that made it so. Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing (MIT Press, 2017) expertly tracks the history of women in computing in Britain and provides us with a blueprint for understanding the gendered nature of labor in the computing industry. Their dominance in the field during the Second World War notwithstanding, women continued to face challenges in the decades that followed, as the British Civil Service showed preference to men by giving them promotions and higher wages while shutting women out of management positions. Meanwhile, the critical work that women had performed operating computers during the war lost value in the public’s mind. This final point is of particular importance in understanding what Hicks calls “the fiction of full automation”—the idea that even as the computing labor force grew, that labor was “obscured by layers of media representation. People did lose their jobs as they were replaced by technological systems, but those systems precipitated a growing demand for computer labor” (Hicks, p. 111). Men, not automation, replaced women. Even as early as the 1970s, the gulf between decision-making, labor, and the public was widening.

Hicks’s chronicle of the mass expulsion of women from the British computing industry nevertheless offers a glimmer of hope. In establishing women in tech as a force in the industry’s earliest days, it completely undermines the common misconception that computer science has always been a male-dominated field. By painting a more accurate picture of the past, Hicks dares us to envision a more equal future. Tech’s present, alas, is anything but.

Safiya Umoja Noble’s Algorithms of Oppression: How Search Engines Reinforce Racism (NYU Press, 2018) brilliantly catalogs the way Google searches reinforce negative stereotypes of Black people. From overt sexualization and fetishization to defining Black bodies as “unprofessional,” there can be little argument that Google search has adopted and repeated common racist tropes (Noble, p. 83). Google has addressed some of these problems on a piecemeal basis. For example, early in her work Noble cites the 2016 case in which a Google Maps search for “n---- house” returned results for the White House in Washington, DC. Google “fixed” that particular case by removing the result, but the underlying problem persisted: the algorithm continues to allow less sensational (and therefore less likely to be manually addressed) yet equally damaging results to propagate (Noble, p. 9).

One of Noble’s most important points comes about a third of the way through her study. She recounts how “many people say to me, ‘But tech companies don’t mean to be racist; that’s not their intent.’” To which Noble responds, “intent is not particularly important” (Noble, p. 90). Her statement cuts directly to the point: the problem exists. Whether or not anyone intended for the technology to produce the results it did matters less than the need to shift focus, intent, and action toward producing different, more equitable results. The task before us is to understand the problem and solve it. To that end, Noble emphasizes that search engines are monetized—commodifying both information and the people seeking it. Any solution must confront the problem of profit derived from damaging stereotypes. The idea that the algorithms reinforcing those stereotypes were not originally designed to do so is a red herring, because it relies on the false assumption that a lack of intent to create a problem implies a lack of responsibility to fix it. “We designed it to make money, not to discriminate” is not an ethical “get out of jail free” card when discrimination is the outcome.

Virginia Eubanks’s Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (St. Martin’s Press, 2018) rounds out this discussion with a consideration of tech’s connection to economic class. Eubanks takes a more journalistic approach than either Hicks or Noble, injecting her account with the personal narratives of the victims of what she calls the “digital poorhouse.” These stories, based on a series of interviews Eubanks conducted between 2014 and 2016, effectively communicate the book’s main argument: our technology harms the poor and obfuscates that fact in a way that makes it easier to continue the injustice (Eubanks, p. 13). Technology provides an illusion of objectivity that makes it easy for people with economic privilege to ignore its repercussions. It is a damning thesis and one well substantiated in the pages of this volume. For example, when Indiana automated its public assistance programs to “maximize efficiency and eliminate fraud,” the system excluded numerous people who, according to the law, should have been included. It made potential recipients jump through hoops, like filing forms electronically even when internet access was limited or nonexistent. Some forms had to be faxed. Those least able to meet the additional demands of the new system—the most vulnerable for reasons of poverty, disability, and geographic remoteness—were most in need of its benefits and least able to access them. When the number of recipients decreased, the Indiana state government celebrated the increased efficiency and the success of getting “undeserving” people off the books. No changes were made until disenfranchised Indianans and their supporters mounted sustained, organized political resistance, and even now, Eubanks reports, many people remain without assistance because remnants of the automated system have left barriers in place.

Hicks, Noble, and Eubanks each describe the outcomes of discrimination in tech, and all three are concerned that our technology and our assumptions about it obscure that very discrimination. This obscurity creates a self-reinforcing loop: because the problem’s origin is read as incidental, little is done to address it, the problem grows worse, and still nothing is done.

Readers may remember the story of Tay, Microsoft’s Twitter bot from 2016. It was a machine-learning experiment to see how a computer program could learn to interact with humans using the massive data set that is Twitter. Unsurprisingly, the bot spiraled into overt racism almost immediately and had to be taken offline after just one day.

Whoever was making the decisions at Microsoft that day had the sense to turn the bot off and call it what it was: a failed experiment. What if, instead, they had simply left it running, forever tweeting explicitly racist statements into the world while claiming it was just the algorithm doing its job? It would have been, at best, a PR disaster for Microsoft to take no action at all. Google search is not Tay the Twitter bot, but it is no more natural or neutral an algorithm than Tay, either.

One of the most insidious aspects of the way technology reinforces biases in gender, race, and class is that it does so in ways that can feel neutral and natural to many people. That feeling makes sense from the perspective of a user or relatively passive observer. Microsoft didn’t “mean” for Tay to be racist any more than Google “meant” the same for its search results. All three studies stand out for shedding light on the details of how and why these kinds of biases exist. Even among people who recognize tech’s problems, it is one thing to say our technology reinforces and reifies existing power structures, but it is altogether another to show the inner workings of how this happened in the past and continues to happen today. This blind spot in our discourse about technology lets Google search results seem like objective answers to questions and allows us to de-historicize the male-dominated computer science industry, making the inherent biases in our technology seem like forces of nature. The books under discussion here respond clearly and forcefully to this naturalization: context matters. Their evidence and arguments provide us with powerful intellectual antibodies against any impulse to further naturalize our technology.

Hicks, Eubanks, and Noble’s work is not merely diagnostic but also prescriptive. They make a strong case that intentional action on the part of the tech community and policymakers is essential and that systemic changes are necessary. To guide those changes, these authors provide us with a sophisticated understanding of the problem. Hicks’s effort to contextualize the British computing industry serves as a radical critique of the tech world’s “great man” theory, which presumes that advances result from the skill and will of a few talented individuals. By examining the processes and systems that created the modern computing industry and our technology, and the consequences of both, Hicks undermines the tech world’s mythological origin story based on individual prowess and merit. In summarizing their methodology, Hicks concludes that “discussing women as a class of labor in computing, rather than searching for exceptional women, fundamentally changes the narrative” (Hicks, p. 233). By shifting the historical focus away from individuals, Hicks peels back a layer of the aforementioned obfuscation of responsibility. Instead of getting caught up in a paralyzing cycle of blame without responsibility, or even highlighting the legitimately impressive contributions of a few select women, Programmed Inequality gives us a model for critiquing a much more complex technological landscape fraught with issues of labor, culture, and public policy, identifying points of failure at each level.

With respect to illuminating the intersection of technology with society and economics, Noble makes a similar point: “tracing these historical constructions of race and gender offline provides more information about the context in which technological objects such as commercial search engines function as an expression of a series of social, political and economic relations—relations often obscured and normalized in technological practices” (Noble, p. 60). The average person searching for an answer on Google has almost no access to the algorithms that determine the results that the system returns. Laying those processes bare in the context of larger societal forces is at the core of Noble’s project. It is not merely a need to critique Google’s search results as discriminatory, but also the necessity of contextualizing and de-normalizing those results that animates Noble’s work.

Eubanks suggests a big-data Hippocratic Oath of sorts as a provisional solution to technological discrimination. Her recommendation goes beyond “doing no harm”—it is a call for people working in tech to contextualize their work with an understanding of the social landscape and the dominant systems of power and inequality in force when they build technology and release it to the public. “I will not,” Eubanks’s third point reads, “use my technical knowledge to compound the disadvantage created by historic patterns of racism, classism, able-ism, sexism, homophobia, xenophobia, transphobia, religious intolerance, and other forms of oppression” (Eubanks, pp. 212-13). As Noble also makes clear, the absence of harmful intent is beside the point. Those with technical knowledge must take positive steps to ensure that discrimination does not happen as a result of their technology’s use.

Ultimately, changing the discriminatory outcomes of our tech is about more than a tweak to an algorithm or a mission statement. It is about more than changing business models. It is even about more than hiring people of color and white women into prominent positions in the industry. Yes, all of those things would be steps in the right direction and are necessary elements of the changes that need to take place. Above all, however, we must shift our narratives about technology’s role in society, and our tech industry must become more self-reflective, open itself to criticism, increase transparency, and finally change based on real-world outcomes. This alone will not solve tech’s problems, but it will create the conditions in which those problems can be addressed by challenging and undermining the temptation to naturalize outcomes. Indeed, as Noble notes, it will almost certainly require strictly enforced public policy that helps to remove layers of obfuscation (Noble, p. 160) and allows us to see and reckon with the critical details of our tech’s discrimination. Even if it were true that no one intended for our technology to lead us to our current state of technological discrimination, we will not be able to move beyond it without the deliberate effort of individuals, governments, and industries. What successes we do find will need constant maintenance and vigilance on the part of those committed to equality in tech.

Perhaps ironically, I first came across this literature on Twitter. The algorithm must have been working in my favor that day. Such is the allure of all this technology in the first place: when it does the things you want it to do, it seems indispensable. It is for just that reason that this literature is, perhaps, the most important you’ll read this year.


Recommended Readings

Domingos, Pedro. The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. New York: Basic Books, 2015.

Eubanks, Virginia. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s Press, 2018.

Ferguson, Andrew Guthrie. The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement. New York: New York University Press, 2017.

Finn, Ed. What Algorithms Want: Imagination in the Age of Computing. Cambridge, MA: MIT Press, 2017.

Hicks, Marie. Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing. Cambridge, MA: MIT Press, 2017.

Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press, 2018.

O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. London: Penguin Books, 2016.

Pasquale, Frank. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press, 2015.

Tufekci, Zeynep. Twitter and Tear Gas: The Power and Fragility of Networked Protest. New Haven, CT: Yale University Press, 2017.

Wachter-Boettcher, Sara. Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech. New York: W.W. Norton & Co., 2017.