Researchers Call for “Algorithmic Reparation” and Racial Justice in Artificial Intelligence (AI)

Forms of automation such as artificial intelligence increasingly inform decisions about who gets hired, is arrested, or receives health care. Examples from around the world show that the technology can be used to exclude, control, or oppress people and to reinforce historic systems of inequality that predate AI.


Now teams of sociologists and computer science researchers say the builders and deployers of AI models should consider race more explicitly, by leaning on concepts such as critical race theory and intersectionality.
Critical race theory is a method of examining the impact of race and power first developed by legal scholars in the 1970s that grew into an intellectual movement influencing fields including education, ethnic studies, and sociology. Intersectionality acknowledges that people from different backgrounds experience the world in different ways based on their race, gender, class, or other forms of identity.


One approach presented before the American Sociological Association earlier this year coins the term algorithmic reparation. In a paper published in Big Data & Society, the authors describe algorithmic reparation as combining intersectionality and reparative practices “with the goal of recognizing and rectifying structural inequality.”
Reparative algorithms prioritize protecting groups that have historically experienced discrimination and directing resources to marginalized communities that often lack the means to fight powerful interests.


“Algorithms are animated by data, data comes from people, people make up society, and society is unequal,” the paper reads. “Algorithms thus arc towards existing patterns of power and privilege, marginalization, and disadvantage.”
The three authors, from the Humanizing Machine Intelligence project at Australian National University and Harvard’s Berkman Klein Center for Internet & Society, argue that efforts to make machine learning more fair have fallen short because they assume that we live in a meritocratic society and put numerical measurements of fairness over equity and justice. The authors say reparative algorithms can help determine whether an AI model should be deployed or dismantled. Other recent papers raise similar concerns about the way researchers have interpreted algorithmic fairness until now.
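To make the idea of a “numerical measurement of fairness” concrete, the sketch below computes demographic parity, one widely used metric, for a hypothetical hiring model; the function, data, and scenario are illustrative assumptions and are not drawn from the papers discussed here.

```python
# Illustrative only: demographic parity compares a model's positive-prediction
# rates across groups. Critics quoted above argue that clearing a single
# numerical threshold like this can leave structural inequities untouched.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups (coded 0/1)."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical predictions from a hiring model, split across two demographic groups.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(round(demographic_parity_gap(y_pred, group), 2))  # 0.2: a 20-point gap in selection rates
```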
The wider AI research community is taking note. The Fairness, Accountability, and Transparency conference recently said it will host a workshop focused on how to critique and rethink fairness, accountability, and transparency in machine learning. The University of Michigan will host an algorithmic reparation workshop in September 2022.


Still, researchers acknowledge that making reparative algorithms a reality could be an uphill battle against institutional, legal, and social barriers akin to those faced by critical race theory in education and affirmative action in hiring.
Critical race theory has become a hot-button political issue, often wielded in ways that have little to do with the theory itself. Virginia governor-elect Glenn Youngkin attacked critical race theory as part of his successful campaign this fall. In Tennessee, an anti-critical-race-theory law led to criticism of books about the desegregation of US schools. By contrast, California governor Gavin Newsom this fall signed a law to make ethnic studies a high school graduation requirement by 2025. A recent study found that ethnic studies classes improved graduation and school attendance rates in San Francisco. At the same time, the 2020 Census found the US is more racially and ethnically diverse than ever. The share of Americans who identify as “white” has declined, and the share who identify as white and another racial group has increased.



Supporters of algorithmic reparation suggest taking lessons from curation professionals such as librarians, who’ve had to consider how to ethically collect data about people and what should be included in libraries. They propose considering not just whether the performance of an AI model is deemed fair or good but whether it shifts power.
The suggestions echo earlier recommendations by former Google AI researcher Timnit Gebru, who in a 2019 paper encouraged machine learning practitioners to consider how archivists and library sciences dealt with issues involving ethics, inclusivity, and power. Gebru says Google fired her in late 2020; she recently launched a distributed AI research center. A critical analysis concluded that Google subjected Gebru to a pattern of abuse historically aimed at Black women in professional environments. Authors of that analysis also urged computer scientists to look for patterns in history and society in addition to data.


Earlier this year, five US senators urged Google to hire an independent auditor to evaluate the impact of racism on Google’s products and workplace. Google did not respond to the letter.
In 2019, four Google AI researchers argued the field of responsible AI needs critical race theory because most work in the field doesn’t account for the socially constructed aspect of race or recognize the influence of history on data sets that are collected.


“We emphasize that data collection and annotation efforts must be grounded in the social and historical contexts of racial classification and racial category formation,” the paper reads. “To oversimplify is to do violence, or even more, to reinscribe violence on communities that already experience structural violence.”


Alex Hanna, lead author of the paper, is one of the first sociologists hired by Google. She was a vocal critic of Google executives in the wake of Gebru’s departure. Hanna says she appreciates that critical race theory centers race in conversations about what’s fair or ethical and can help reveal historical patterns of oppression. Since then, Hanna has coauthored a paper, also published in Big Data & Society, that confronts how facial recognition technology reinforces constructs of gender and race that date back to colonialism.


In late 2020, Margaret Mitchell, who with Gebru led the Ethical AI team at Google, said the company was beginning to use critical race theory to help decide what’s fair or ethical. Mitchell was fired in February 2021. A Google spokesperson says critical race theory is part of the review process for AI research.



Another paper, to be published next year by Rashida Richardson, an assistant professor of law and political science at Northeastern University, contends that AI in the US cannot be understood without acknowledging the influence of racial segregation. The legacy of laws and social norms designed to control, exclude, and otherwise oppress Black people is too influential to ignore. Richardson is also an adviser to the White House Office of Science and Technology Policy.


For example, studies have found that algorithms used to screen apartment renters and mortgage applicants disproportionately disadvantage Black people. Richardson says it’s essential to remember that federal housing policy explicitly required racial segregation until the passage of civil rights laws in the 1960s. The government also colluded with developers and homeowners to deny opportunities to people of color and keep racial groups apart. She says segregation enabled “cartel-like behavior” among white people in homeowners associations, school boards, and unions. In turn, segregated housing practices compound problems or privilege related to education or generational wealth.


Historical patterns of segregation have poisoned the data on which many algorithms are built, Richardson says, such as for classifying what’s a “good” school or attitudes about policing Brown and Black neighborhoods.
“Racial segregation has played a central evolutionary role in the reproduction and amplification of racial stratification in data-driven technologies and applications. Racial segregation also constrains conceptualization of algorithmic bias problems and relevant interventions,” she wrote. “When the impact of racial segregation is ignored, issues of racial inequality appear as naturally occurring phenomena, rather than byproducts of specific policies, practices, social norms, and behaviors.”


As a solution, Richardson believes AI can benefit from adopting principles of transformative justice, such as including victims and impacted communities in conversations about how to build and design AI models and making repair of harm part of the process. Similarly, evaluations of AI audits and algorithmic impact assessments carried out in the past year conclude that legal frameworks for regulating AI typically fail to include the voices of communities impacted by algorithms.


Richardson’s writing comes at a time when the White House is considering how to address the ways AI can harm people. Elsewhere in Washington, DC, members of Congress are working on legislation that would require businesses to regularly report summaries of algorithm impact assessments to the Federal Trade Commission and create a registry of systems critical to human lives. A recent FTC announcement hints the agency will establish rules to regulate discriminatory algorithms in 2022.



Some local leaders aren’t waiting for Congress or the FTC to act. Earlier this month, the attorney general of the District of Columbia introduced the Stop Discrimination by Algorithms Act, which would require audits and outline rules for algorithms used in employment, housing, or credit.
Updated, 12-25-21, 10:25am ET: An earlier version of this article did not include Rashida Richardson’s academic affiliation.
