This year has seen many dismaying events, among them bold claims of AI breakthroughs. Industry commentators speculated that the language-generation model GPT-3 may have achieved "artificial general intelligence," while others lauded DeepMind's protein-folding algorithm, AlphaFold, and its capacity to "transform biology." While the basis of such claims is thinner than the breathless headlines suggest, this has done little to dampen enthusiasm across the industry, whose profits and prestige depend on AI's proliferation.
It was against this backdrop that Google fired Timnit Gebru, our dear friend and colleague, and a leader in the field of artificial intelligence. She is also one of the few Black women in AI research and an unflinching advocate for bringing more BIPOC people, women, and non-Westerners into the field. By any measure, she excelled at the job Google hired her to do, including demonstrating racial and gender disparities in facial-analysis technologies and developing reporting guidelines for datasets and AI models. Ironically, this work, and her outspoken advocacy for those underrepresented in AI research, are also the reasons, she says, the company fired her. According to Gebru, after demanding that she and her colleagues retract a research paper critical of large (and profitable) AI systems, Google Research told her team that it had accepted her resignation, despite the fact that she had not resigned. (Google declined to comment for this story.)
Google's callous treatment of Gebru exposes a dual crisis in AI research. The field is dominated by a largely white, male workforce, and it is controlled and funded primarily by major industry players: Microsoft, Facebook, Amazon, IBM, and yes, Google. With the firing of Gebru, the civility politics that held together the young effort to build necessary guardrails around AI has been torn apart, bringing questions about the racial homogeneity of the AI workforce and the inefficacy of corporate diversity programs to the center of the discourse. But this episode has also made clear that, however sincere a company may seem, corporate-funded research can never be divorced from the realities of power and the flows of revenue and capital.
This should concern us all. As AI proliferates into areas such as health care, criminal justice, and education, researchers and advocates are raising urgent concerns. These systems make determinations that directly shape lives, while at the same time being embedded in organizations structured to reinforce histories of racial discrimination. AI systems also concentrate power in the hands of those who design and use them, while obscuring responsibility (and liability) behind the veneer of complex computation. The risks are profound, and the incentives are decidedly perverse.
The current crisis exposes the structural barriers that limit our ability to build effective protections around AI systems. This is especially important because the people vulnerable to harm and bias from AI's predictions and determinations are primarily BIPOC people, women, religious and gender minorities, and the poor: those who have borne the brunt of structural discrimination. Here we have a clear racialized divide between those who benefit (the corporations and their primarily white, male researchers and developers) and those most likely to be harmed.
Take facial recognition technologies, for instance, which have been shown to misidentify people with darker skin more often than those with lighter skin. That alone is alarming. But these racialized "errors" are not the only problems with facial recognition. Tawana Petty, director of organizing at Data for Black Lives, points out that these systems are disproportionately deployed in predominantly Black neighborhoods and cities, while the cities that have succeeded in banning and pushing back against the use of facial recognition are predominantly white.
Without independent, critical research that centers the perspectives and experiences of those who bear the harms of these technologies, our ability to understand and contest the overblown claims made by industry is severely hampered. Google's treatment of Gebru makes increasingly clear where the company's priorities seem to lie when critical work pushes back against its business incentives. This makes it nearly impossible to ensure that AI systems are accountable to the people most vulnerable to their harms.
Scrutiny of the industry is further compromised by the close ties between tech firms and ostensibly independent academic institutions. Researchers from corporations and academia publish papers together and rub elbows at the same conferences, with some researchers even holding concurrent positions at tech companies and universities. This blurs the boundary between academic and corporate research and obscures the incentives underwriting such work. It also means the two groups look remarkably alike: AI research in academia suffers from the same racial and gender homogeneity problems as its corporate counterparts. Moreover, top computer science departments receive copious amounts of Big Tech research funding. We need only look to Big Tobacco and Big Oil for troubling templates that expose just how much influence over the public understanding of complex scientific issues large companies can exert when knowledge creation is left in their hands.