
Philadelphia Inquirer: Will SEPTA’s New Artificial Intelligence Security System Racially Profile Riders?

From The Philadelphia Inquirer. By Helayne Drell and Ravi B. Parikh, for The Inquirer.

SEPTA's security system is getting a makeover within the next two months. SEPTA recently announced that starting in January 2023, artificial intelligence software called ZeroEyes will begin scanning surveillance footage at 300 Philadelphia transit stops to detect the presence of guns. If a firearm is detected, ZeroEyes will trigger an alert to trained security specialists, who then request police dispatch.

With Philadelphia’s rising gun-related incident rates, ZeroEyes is a promising solution to prevent gun-related crime through early police intervention. However, as a research group that studies bias in AI, we know that there is a sordid history of image recognition AI perpetuating racial bias in criminal justice. ZeroEyes carries these same risks. Before implementing the program, SEPTA and other Philadelphia agencies can take proactive, proven steps to minimize the risk of racial bias.

ZeroEyes does not recognize a gun itself; it recognizes patterns associated with guns, such as the shape and color of an object. However, some of the features it relies on could be inherently correlated with race and socioeconomic status, like skin color, clothing style, and the zip code of the station.

Unfortunately, it is nearly impossible for algorithm developers to remove all potentially biased features from an AI model. A lack of racial or ethnic diversity in the populations used to train the algorithm could leave it unable to distinguish between guns, cell phones, and other small objects held in the hands of people of color, effectively "baking" that presumption into the algorithm.
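
One safeguard agencies can ask for before deployment is a subgroup error audit: measuring how often the system flags harmless objects as guns, broken out by demographic group or by station. The sketch below is a minimal illustration in Python, not ZeroEyes' actual code; the function name, data format, and toy numbers are all hypothetical.

```python
# Minimal sketch of a subgroup false-positive audit for a gun-detection model.
# All names and the data format are hypothetical; nothing here reflects ZeroEyes itself.
from collections import defaultdict

def false_positive_rates(predictions, labels, groups):
    """Compute the false-positive rate for each group.

    predictions: list of bools, True if the model flagged a gun
    labels:      list of bools, True if a gun was actually present
    groups:      list of group identifiers for each frame (e.g., station zip code)
    """
    false_positives = defaultdict(int)  # frames incorrectly flagged, per group
    negatives = defaultdict(int)        # frames with no gun present, per group
    for pred, label, group in zip(predictions, labels, groups):
        if not label:
            negatives[group] += 1
            if pred:
                false_positives[group] += 1
    return {g: false_positives[g] / negatives[g] for g in negatives if negatives[g] > 0}

# Toy example: a large gap between groups would signal the kind of disparate
# error rate ProPublica documented for COMPAS.
rates = false_positive_rates(
    predictions=[True, False, True, False],
    labels=[False, False, False, False],
    groups=["A", "A", "B", "B"],
)
print(rates)  # {'A': 0.5, 'B': 0.5} on this toy data
```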

Similar algorithms have perpetuated racial bias. In 2016, researchers at ProPublica reported that COMPAS, an algorithm widely used by judges to predict a defendant's likelihood of recidivism, incorrectly labeled Black defendants as "high-risk" at twice the rate of white defendants. When digital cameras became widely available, some detected Asian people as perpetually blinking. Algorithmic bias is common in other fields as well, including mortgage lending, speech recognition, and our lab's field, health care.

Read more at The Philadelphia Inquirer.