Featured book: More than a Glitch

When technology reinforces inequality, it’s not just a glitch—it’s a signal that we need to redesign our systems to create a more equitable world

The word “glitch” implies an incidental error, as easy to patch up as it is to identify. But what if racism, sexism, and ableism aren’t just bugs in mostly functional machinery: what if they’re coded into the system itself? In More than a Glitch, Meredith Broussard—a data scientist and one of the few Black female researchers in artificial intelligence—demonstrates how neutrality in tech is a myth and why algorithms need to be held accountable. 

Author Meredith Broussard.

Broussard argues that, even when technologies are designed with good intentions, fallible humans develop programs that can result in devastating consequences. 

More than a Glitch provides a daunting range of examples: from mortgage-approval algorithms that encourage discriminatory lending to the dangerous feedback loops that occur when medical diagnostic algorithms are trained on non-diverse data. Broussard describes how certain automated soap dispensers register only lighter skin tones, and how a Black man was arrested in Detroit when facial recognition software incorrectly flagged him as a shoplifter—an error that is all too common with similar AI programs.

“Unfortunately, there isn’t a magic switch that we can flip to make technology work for everyone, everywhere,” Broussard writes. “Eliminating racism and ableism and gender bias from technological systems is a complex problem.” Technology, according to Broussard, simply reproduces the status quo and our existing reality. Thus, tech is racist and sexist and ableist—because it reflects the world around us.

Broussard argues that the solution isn’t to make omnipresent tech more inclusive, but to root out the algorithms that target certain demographics as “other” to begin with.

While that task sounds monumental, she sees hope in sweeping recent changes: in the EU, proposed legislation calls for high-risk AI technology to be closely controlled and regulated. While the technology used to unlock your phone may be low-risk, facial recognition used in policing could be considered high-risk. The proposed legislation would require anyone using high-risk AI to demonstrate that it is not discriminatory.

Change may be slow, but it is possible. “By adopting a more critical view of technology, and by being choosier about the tech we allow into our lives and our society, we can employ technology to stop reproducing the world as it is, and get us closer to a world that is truly more just,” Broussard writes.
