Technology

Civil Rights Violations in the Face of Technological Change

October 22, 2020  • Dominique Harrison

*originally published November 2019.

Today, communities of color face a battle to uphold civil rights that have been abridged through online platforms. The opportunity, well-being, and liberty of Black and Brown Americans are expanded or denied as emerging technologies transform our society.

Civil rights, by definition, are sets of guaranteed rights that include equal treatment, equal opportunity, and freedom from discrimination. The civil rights movement in the United States was a two-decades-long struggle to enforce constitutional and legal rights for African Americans. The movement was largely about getting the federal government to force state governments to follow federal law in voting, job equality, integration of public schools, equality in housing, and equal protection under all laws. Many African Americans suffered violence, discrimination, prejudice, intimidation, and death at the hands of white Americans and white supremacist groups in their fight for these rights. Some history books say the civil rights movement ended in the late 1960s, but I would argue that Black and Brown people are still fighting to end racial discrimination and gain equal rights under the law, now in part by fighting to force tech companies to uphold our civil liberties online.

In the age of technological innovation, people of color find themselves waging the same fight for equal rights. This time, the fight is both offline and online. One such area is algorithmic bias. An algorithm is a process or set of rules, a sequence of mathematical calculations applied to data, that produces outputs people use to make decisions. Algorithmic bias (also called machine learning bias or AI bias) is a systematic error in the coding, collection, or selection of data that produces unintended or unanticipated discriminatory results. Algorithmic bias is perpetuated when data scientists train algorithms on patterns found in historical data. The biased results are then used by humans to make decisions that are systematically prejudiced against communities of color.
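To make the mechanism concrete, here is a minimal, purely hypothetical sketch in Python. The groups, numbers, and decision rule are all invented for illustration; the point is only that a model which "learns" nothing but the patterns in skewed historical decisions will recommend the same skew going forward.

```python
# A toy, hypothetical illustration of algorithmic bias: the "model" below
# memorizes each group's approval rate from skewed historical decisions,
# then turns that skew into an automated rule. All data here is invented.

historical_decisions = [
    # (applicant_group, approved) -- past human decisions, already skewed
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def train(records):
    # "Training" here is just tallying each group's historical approval rate.
    counts = {}
    for group, approved in records:
        hits, total = counts.get(group, (0, 0))
        counts[group] = (hits + int(approved), total + 1)
    return {g: hits / total for g, (hits, total) in counts.items()}

model = train(historical_decisions)

def predict(group):
    # Recommend approval whenever the group's past rate exceeds 50%, so the
    # prejudice baked into the training data becomes the model's policy.
    return model[group] > 0.5

print(predict("group_a"))  # True:  favored by history, favored by the model
print(predict("group_b"))  # False: disadvantaged by history, and now by the model
```

Nothing in the code mentions race or gender; the discrimination rides in entirely on the historical data.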

A recent example of this is when Amazon had to scrap its recruiting tool because the system was not rating candidates in a gender-neutral way. The AI models were trained on ten years of resume data, most of it from white men. Thus, Amazon’s recruiting tool taught itself that male candidates were preferable.

These biased outcomes have material consequences for Black and Brown people. Advancements in technology replicate social inequalities and racial discrimination, forcing communities of color to fight the civil rights violations of the past all over again. Black and Brown people are stripped of equitable opportunities in housing, schools, loans, and employment because of biased data. These groups are intentionally surveilled, and the use of their information raises privacy concerns for all.

Black and Brown people are still fighting to end racial discrimination and gain equal rights under the law.

Violations of civil rights online

The internet is a new frontier in which varying populations can participate in ways much different from the past. So much so that policy regulation of tech is constantly trying to catch up with emerging technologies, given the slow pace of policymaking and the rapid pace of innovation. The content on online platforms such as Facebook, Google, and Amazon is not regulated in the US. The current Federal Communications Commission, charged with regulating communications, has also returned to a light-touch regulatory approach that cedes power and authority to tech companies and media giants. Thus, online platforms in the United States have grown into largely unchecked, powerful companies that do not have to adhere to the public’s interest. The current regulatory approaches and instruments in the field of tech are not sufficient to promote and safeguard public interests. As a result, the civil liberties and rights of communities of color have been violated online.

There are four areas in which communities of color still face challenges to their civil and human rights through technological innovation: voting; employment, housing, and loans; schools; and law enforcement.

1. Voter suppression and racial targeting

While the 15th Amendment and later the Voting Rights Act of 1965 guaranteed the right to vote to all citizens regardless of race, color, and gender and removed legal barriers to voting, the Russian attempt to influence the 2016 elections worked to discourage voting in the Black community. The Russian Internet Research Agency (IRA) used voter suppression tactics on online platforms such as Twitter, Facebook, Instagram, and YouTube to influence African Americans. These included misinformation, candidate support redirection, and turnout depression – all tactics that vehemently “abridge[d] the right of [Black] citizen[s]… to vote on account of race….” The IRA was able to buy political advertisements on Facebook using racial target marketing options that infiltrated and exploited the social narratives important to African Americans. This continues in the 2020 election, where Russian disinformation campaigns are targeting African Americans and the Latino community.

According to the Pew Research Center, the Black voter turnout rate declined sharply in 2016, the first such decline in 20 years. Online platforms played a role in the spread of misinformation at the expense of African Americans’ electoral power, and the resulting voter suppression tactics helped boost Trump’s campaign.

2. Bias in corporate algorithms: employment, housing, and loans

Facebook and other tech companies have come under scrutiny and faced lawsuits for participating in illegal discriminatory practices in their advertising of housing, employment, and loans. Targeted marketing policies and practices used by tech companies permit advertisers to exclude marginalized groups from seeing specific ads, much like the racist housing practices of early realtors, developers, and the federal government. Companies are using targeted advertising systems to exclude communities of color from seeing ads for homes based on their “ethnic affinity.”

Laws such as Title VII of the Civil Rights Act of 1964 and the Fair Housing Act protect people from discrimination by employers and when they are renting or buying a home, getting a mortgage, seeking housing assistance, or engaging in other housing-related activities. Yet current online advertising practices contribute to the systematic inequality communities of color face in income, housing, and wealth.

Communities of color are still fighting for their children to obtain an equal level of education.

Facebook recently settled with a number of civil rights groups over these violations. In the settlement, Facebook agreed to prevent discrimination in these areas “by establish[ing] a separate advertising portal for creating housing, employment, and credit (“HEC”) ads on Facebook, Instagram, and Messenger that will [not allow users to block consumers based on]… gender, age, and multicultural affinity”. But research has shown that the process used by Facebook’s ad system can “skew” ad delivery based on demographics in ways that advertisers do not intend. More research is needed on the role of ad delivery optimization (how ads are delivered to specific users) to understand further implications of discriminatory results from online ad practices. Much remains to be seen of Facebook’s efforts to combat discrimination.

3. Inequity in algorithmic school placement systems

Brown v. Board of Education (1954) was supposed to ensure that African Americans gained an equal level of education by desegregating public schools. The landmark case held that education, among other services and resources in public facilities, was not equal. Today, communities of color are still fighting for their children to obtain an equally high level of education, one that can ensure future success and opportunities for a better life no matter where they live or how much money their families make.

Unified enrollment (UE) systems have been developed to give families a central method and process for ranking their school preferences in cities with many school choice options. The system, which uses algorithms, is supposed to streamline the burdensome application process for families and school districts and, ideally, increase equity. But UE systems may impose unequal placement on underprivileged children in urban communities. In fact, some UE systems have been exposed as racially biased in the placement of African American and Latino students.
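Many UE systems are built on matching mechanisms such as the student-proposing deferred acceptance algorithm. The sketch below is a minimal, illustrative version (students, schools, priorities, and capacities are all invented). It shows why a mechanically "fair" algorithm is only as equitable as its inputs: if a school's priority list encodes, say, residential patterns shaped by segregation, the matching faithfully reproduces that inequity.

```python
# A minimal, illustrative sketch of student-proposing deferred acceptance,
# the kind of matching mechanism many unified enrollment systems use.
# All students, schools, priorities, and capacities here are invented.

student_prefs = {
    "s1": ["high_performing", "neighborhood"],
    "s2": ["high_performing", "neighborhood"],
    "s3": ["high_performing", "neighborhood"],
}
# Each school ranks students (earlier in the list = higher priority).
# If these priority lists encode biased inputs, the match inherits the bias.
school_priority = {
    "high_performing": ["s1", "s2", "s3"],
    "neighborhood": ["s3", "s2", "s1"],
}
capacity = {"high_performing": 1, "neighborhood": 2}

def deferred_acceptance(prefs, priority, cap):
    next_choice = {s: 0 for s in prefs}         # index of each student's next proposal
    tentative = {school: [] for school in cap}  # current tentative admits per school
    unmatched = set(prefs)
    while unmatched:
        s = unmatched.pop()
        if next_choice[s] >= len(prefs[s]):
            continue                            # student has exhausted their list
        school = prefs[s][next_choice[s]]
        next_choice[s] += 1
        tentative[school].append(s)
        # Keep the highest-priority students up to capacity; bump the rest.
        tentative[school].sort(key=priority[school].index)
        while len(tentative[school]) > cap[school]:
            unmatched.add(tentative[school].pop())
    return {s: school for school, admits in tentative.items() for s in admits}

print(deferred_acceptance(student_prefs, school_priority, capacity))
# s1 -> high_performing; s2 and s3 -> neighborhood
```

Every student "chose" the high-performing school, but only the student at the top of its priority list got in; the algorithm never asks where those priorities came from.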

Decision-making artificial intelligence tools meant to bring coherence and transparency to the school choice process can increase the likelihood of African American and Latino students being placed in low-performing schools. In Boston, the UE system used for school placement has been shown to exacerbate segregation while locking many Black and Latino students out of high-performing schools. Such practices by public schools in large cities prove to be a barrier to equal educational opportunity and to the civil rights of children of color.

4. Racism in predictive policing algorithms

Law enforcement has used technology to police communities of color in ways that enable discriminatory consequences: more surveillance, stops, and arrests. Police departments are using predictive analytics and data-driven metrics to inform policing tactics and practices. Predictive policing, the use of analytical techniques in law enforcement to identify potential criminal activity, provides a distorted view of crime in communities of color. These technologies attempt to forecast when and where future crime may occur, and sometimes who will be involved. That “who” is primarily young Black and Latino men in poor communities. As a result, law enforcement focuses on patrolling these communities, and the resulting arrests disproportionately fall on African Americans and Latinos, groups already at high risk of victimization.

The Chicago Police Department (CPD) has been found to engage in patterns of civil rights violations against communities of color, rife with “systemic endemic structural racism.” The CPD has also used algorithms to forecast criminal activity. As one can imagine, systemic bias embedded in predictive policing data has led to bad predictions that perpetuate racist police practices in Chicago. Historical data based on unlawful practices, such as false police reports, unconstitutional searches, targeted stops, and arrests, has led to biased algorithms that disproportionately rank Black and Brown individuals and their communities as high risk for crime. This “dirty data” has been shown to produce unjustified police contact and action.
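The feedback loop behind “dirty data” can be illustrated with a small, entirely hypothetical simulation. In the sketch below, two areas have identical true incident rates, but one starts with more recorded arrests; because patrols are allocated according to past records, and new records are generated where patrols go, the disparity compounds on its own.

```python
# A small, entirely hypothetical simulation of the "dirty data" feedback
# loop critics describe in predictive policing. Area names, rates, and
# starting counts are invented. Both areas have the SAME true incident
# rate; only the historical records differ.
import random

random.seed(0)
TRUE_RATE = 0.10                        # identical underlying rate everywhere
records = {"area_A": 30, "area_B": 10}  # skewed historical arrest records

for week in range(20):
    total = sum(records.values())
    for area, past in list(records.items()):
        # Patrols are allocated in proportion to past records...
        patrols = round(10 * past / total)
        # ...and new records are generated where patrols go: you mostly
        # find (and record) incidents where you are looking for them.
        records[area] += sum(random.random() < TRUE_RATE for _ in range(patrols))

print(records)  # area_A's recorded "crime" pulls further ahead despite equal true rates
```

Even with no difference in underlying behavior, the record counts diverge; feeding those records back into a forecasting model would then “predict” more crime exactly where more patrols were sent.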

Amazon, IBM, and Microsoft recently vowed not to sell facial recognition software to law enforcement agencies until Congress develops federal policies to govern these technologies. Yet, there are still many law enforcement agencies using facial recognition software as a tool. And, the surveillance of communities of color is a civil rights issue!

There needs to be more diversity in the tech space.

With the use of AI tools in law enforcement, communities of color face an increased risk of not being protected under the law. These communities become over-policed, which undeniably perpetuates the cycle of imprisonment of Black and Brown people.

We shall overcome…someday?!

Current laws do not adequately address biased data that produces discriminatory outcomes for communities of color. While the NAACP, ACLU, and other civil rights organizations are taking steps to hold tech companies accountable, we need more action by the federal government and policymakers. The Federal Trade Commission and the Department of Housing and Urban Development have investigated some of the aforementioned violations. But, how else might other departments within the federal government, like the Department of Justice, step in to enforce civil rights laws in the tech space? In communities across America, city officials are also making decisions based on biased data in law enforcement, city planning, social services, and school placement. How can local government make better decisions in their use of algorithms?

Some representatives have introduced legislation, such as the Algorithmic Accountability Act of 2019, that would direct the FTC to require companies to study their computer algorithms for inaccurate, unfair, biased, or discriminatory decisions and to fix the flaws they find. If passed, this would be a great step toward reducing decisions based on biased algorithms. Policymakers need to explore existing laws and policy options that address civil rights violations and apply them to the practices used by digital platforms. We also need more discussion of civil rights laws and their relationship to Section 230 of the Communications Decency Act.

In addition, there needs to be more diversity in the tech space, and specifically in the emerging technology industry, to address systematic flaws in the production of data. “The diversity problem [in tech]…affects how AI companies work, what products get built, who they are designed to serve, and who benefits from their development.”

Further, while some civil and human rights groups are aware of the four areas that challenge the civil rights of marginalized communities in the online space, more research and attention are needed to assess the ways in which technological innovation impacts racial disparities. The data and algorithms used by tech companies must be made available (a practice known as “algorithmic transparency”) so that researchers can conduct studies that examine potential pitfalls for communities of color. The allure and attraction of the online world needs to be connected to the discrimination and hate of the real world. And for this, data must be used for good.
