Decoding discrimination: unpacking the growing impact of algorithmic bias

Human intelligence is no longer sufficient to process the growing demand for data-driven decisions, so industry leaders across the private and public sectors hope artificial intelligence (AI) can fill the gap.

Not only can AI streamline decision making and process large quantities of information quickly, but the perceived objectivity of an algorithm offers a sense of fairness and accuracy that human methods lack. However, growing evidence shows that artificial intelligence cannot always outsmart the impact of longstanding inequality.

What is algorithmic bias?
The tendency of a computer system to consistently produce data-driven outcomes that disadvantage certain groups is known as “algorithmic bias.” Between the ones and zeroes lie the biases of the people who wrote the code, a group that remains disproportionately white and male. On top of that, AI software is trained on data sets that encode many different human biases.

Instead of transcending human bias, algorithms learn to reward or punish the same traits society does. The software then produces inferences that amplify the very discriminatory outcomes it was intended to avoid, all under the cloak of perceived objectivity.
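To make that mechanism concrete, consider a minimal sketch in Python. The data is entirely synthetic, and the hiring scenario, group penalty, and logistic model are illustrative assumptions rather than any real vendor’s system; the point is only that a model fit to biased historical decisions learns the bias itself:

```python
# Synthetic illustration: a model trained on biased historical hiring
# decisions learns to penalize group membership directly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000

skill = rng.normal(size=n)          # talent, identically distributed across groups
group = rng.integers(0, 2, size=n)  # 0 or 1; a protected attribute

# Historical labels (assumed for illustration): past reviewers rewarded
# skill but docked members of group 1.
hired = (skill - 1.0 * group + rng.normal(scale=0.5, size=n)) > 0

# Fit a model on those decisions, with the protected attribute as a feature.
model = LogisticRegression().fit(np.column_stack([skill, group]), hired)
print("learned weights (skill, group):", model.coef_[0])
# The large negative weight on `group` shows the model reproducing the
# historical penalty under a veneer of data-driven objectivity.
```

Simply dropping the group column does not solve the problem in practice, because correlated features such as zip code or employment history can let a model reconstruct it.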

As AI adoption expands across nearly every sector, so does the reach of algorithmic bias. The flawed decisions these systems produce have life-altering consequences: underrating women’s creditworthiness, recommending harsher sentences for African American offenders, and deciding who is most qualified for a job.

Algorithmic bias in the health care industry
The health care industry’s growing use of artificial intelligence offers the potential for groundbreaking innovation, but poor training data and variable selection threaten to undermine this lifesaving potential. A recent study published in Science revealed that an AI tool used by a major health system made African American patients significantly less likely to be flagged for additional care than their white counterparts. Under this flawed system, which shaped care decisions for millions of patients, just 17.7% of African American patients received additional attention, such as extra time with a doctor or nurse. If the disparity were rectified, that number would jump to 46.5%.

The program’s flaw stemmed from using medical expenditures as a stand-in for medical need. To predict which patients would benefit most from additional care, the algorithm was trained to prioritize the patients on whom the most money had been spent. But because longstanding inequities in access mean that less is spent on African American patients than on white patients who are equally sick, spending is a biased proxy: the algorithm’s recommendations amplified the privilege white patients already enjoyed.
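A rough simulation shows how the proxy fails. Everything below is made up for illustration (the 40% access gap and the top-quartile referral rule are assumptions, not figures from the Science study), but the pattern is the one the researchers described: equal need, unequal spending, unequal referrals.

```python
# Synthetic illustration of proxy bias: spending stands in for need,
# so a group whose spending is suppressed is under-referred despite equal need.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# True medical need, identically distributed across both groups.
need = rng.normal(loc=5.0, scale=1.0, size=n)
group = rng.integers(0, 2, size=n)  # 0 = group A, 1 = group B

# Assumption: access barriers mean group B's need translates into
# roughly 40% less recorded spending.
access = np.where(group == 1, 0.6, 1.0)
spending = need * access + rng.normal(scale=0.5, size=n)

# The "algorithm": refer the top quartile of spending for extra care,
# mirroring a system trained to prioritize the highest spenders.
threshold = np.quantile(spending, 0.75)
flagged = spending > threshold

for g, label in [(0, "group A"), (1, "group B")]:
    print(f"{label}: mean need {need[group == g].mean():.2f}, "
          f"flagged for extra care {flagged[group == g].mean():.1%}")
```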

Until algorithms can be written to better account for the larger challenges driving racial inequality, increased use of AI could lead to greater disparities in the quality of care that protected groups receive.

Moving towards algorithmic accountability
Like other forms of discrimination, algorithmic bias is dangerous because it is hidden. In many cases, the individuals about whom an algorithm issues recommendations never learn that no real person made the call. Even when users are aware, developers can deny them access to the source code on the grounds of protecting trade secrets.

Federal legislators have felt pressure to address the issue and are responding to growing calls for algorithmic accountability. Multiple bills, including the Algorithmic Accountability Act, have been introduced during the 116th Congress to establish first-of-their-kind federal oversight of AI. Additionally, established regulators, including the Equal Employment Opportunity Commission, are exploring whether existing laws can protect Americans from the negative outcomes of this emerging challenge.

Even the White House has weighed in on the future of artificial intelligence, releasing a set of guidelines for federal agencies to use when considering new regulations.

Across the Atlantic, the European Commission (EC) is looking to establish the bloc as a leader in progressive AI policy. Rather than shying away from the technology entirely, the EC is taking steps to improve the data sets used to train these systems. In a recently released white paper outlining the European strategy for artificial intelligence, the Commission recommended stronger transparency measures for “high-risk” AI systems, along with requirements that the data feeding those systems be unbiased and tested.

Addressing the looming threat of algorithmic bias will require more than government intervention alone. Developers, with their deep wells of resources and talent, can take steps to limit the biases contaminating their code and improve transparency while protecting innovation. Neither AI nor systemic bias is likely to go away any time soon, but through cross-sector collaboration and increased awareness, the two need not be linked forever.

To learn more about the relationship between artificial intelligence and discrimination, download National Journal’s algorithmic bias overview.