A simple definition of predictive policing is “the use of analytics and statistics to forecast crime.” The Brennan Center for Justice at NYU, a nonpartisan law and public policy institute focused on defending democracy and reforming the justice system, expands on that definition by stating that predictive policing is “the use of computer systems to analyze large sets of data, including historical crime data, to help decide where to deploy police or to identify individuals who are purportedly more likely to commit or be a victim of a crime.”
A common criticism of predictive policing is that by relying on historical data to predict future events, any bias that may have been present in past policing is baked into future predictions. In this time of heightened awareness of racial injustice, it is imperative to remove racial bias from policing and to understand how we can do better going forward, rather than repeat the mistakes of the past.
One attempt at a predictive policing service was developed by our colleagues at UCLA. PredPol uses machine learning to analyze two to five years of historical police data to train an algorithm to predict future crime. The PredPol website notes that the historical information used consists of crime type, crime location, and crime date and time. It specifically states that no personally identifiable, demographic, ethnic, or socio-economic information is used, which, it claims, eliminates privacy and civil rights concerns.
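To make the mechanism concrete, here is a minimal sketch of the kind of grid-based risk scoring such a system might perform. It assumes a self-exciting (“aftershock”-style) point-process model of the sort described in the academic literature behind PredPol; the parameter values and function names here are illustrative, not PredPol’s actual implementation.

```python
import math
from collections import defaultdict

# Illustrative parameters, not PredPol's actual values.
CELL = 0.005             # grid cell size in degrees (roughly 500 meters)
MU = 0.1                 # background crime rate per cell
THETA, OMEGA = 0.5, 0.1  # self-excitation weight and decay rate (per day)

def cell_of(lat, lon):
    """Map a coordinate onto a discrete grid cell."""
    return (round(lat / CELL), round(lon / CELL))

def risk_scores(events, now):
    """Score each grid cell as a background rate plus exponentially
    decaying contributions from past events in that cell.

    events: iterable of (crime_type, lat, lon, t_days) records,
            mirroring the four fields PredPol says it uses.
    now:    the current time, in days since the start of the data.
    """
    scores = defaultdict(lambda: MU)
    for _crime_type, lat, lon, t in events:
        if t <= now:
            # Each past crime raises near-term risk in its own cell,
            # with an influence that fades exponentially over time.
            scores[cell_of(lat, lon)] += THETA * OMEGA * math.exp(-OMEGA * (now - t))
    return scores
```

Notice that nothing in this sketch is demographic, yet the feedback loop critics describe is already visible: neighborhoods that were heavily policed in the past accumulate more recorded events and are therefore scored as higher-risk in the future.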
PredPol predictions are delivered to police departments each day in the form of focus areas highlighted on Google Maps. The highlighted areas, or red boxes, represent ‘the highest-risk areas for each day’, and officers are instructed to spend roughly 10% of their shift time patrolling those areas. The LAPD began using PredPol in 2011, and it has reportedly been used by at least 60 other police departments around the nation.
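Building on the risk_scores sketch above, the daily ‘red boxes’ would then simply be the top-scoring cells; top_boxes below is a hypothetical helper, not part of any real PredPol interface.

```python
def top_boxes(events, now, k=20):
    """Return the k highest-risk grid cells for the day, analogous to
    the highlighted boxes delivered to patrol officers each morning."""
    scores = risk_scores(events, now)
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Example: rank today's cells using two years of (type, lat, lon, day) records.
# boxes = top_boxes(crime_records, now=730)
```

A cell stays on the list only as long as recent events keep feeding its score, which is how a fresh daily forecast emerges from years of historical data.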
In April 2020, the LAPD announced that it would stop using the AI-driven predictive policing software. The LAPD had used PredPol for nine years, during which time critics lobbied for police departments to cease using it, arguing that it was unjust and racist. Opposition groups even gathered academics to speak out against the use of PredPol.
Other police departments have also come under criticism for their use of predictive policing. In 2016, a group of 17 organizations, including the ACLU, the NAACP, and the Brennan Center for Justice, called for law enforcement agencies nationwide to review their use of predictive policing, citing racial bias and a lack of transparency. The Chicago Police Department ended its predictive policing effort, the ‘Strategic Subject List’ (SSL), late last year.
The City of Santa Cruz stopped using PredPol in 2017. Mayor Justin Cummings now plans to take things a step further by signing a law banning all predictive policing and facial recognition technology in the city. The motion passed the Santa Cruz City Council Committee in March. Once signed into law, it will be the first ban of its kind in the nation.
As we undergo seismic shifts in society this year and redefine what is and isn’t acceptable on the way toward a more equitable future, we must ask ourselves whether technology that increases efficiency also facilitates social justice. The concept of policing is being deconstructed and reimagined in a way we haven’t experienced in decades, if ever. This presents a rare opportunity for us to contribute to the dialogue on how best to serve and protect all persons under police care. Predictive technology can be an enormously powerful tool for anticipating future events and better preparing first responders for the emergencies they face, and we must ensure that it is justice, not bias, that is baked into its algorithms.