Since its formal introduction in 2011, “predictive policing” has been mired in controversy. The media touted it as a “revolutionary innovation” capable of “stopping crime before it starts” (LA Times). However, forces that have employed the technology – as well as researchers at the Royal United Services Institute – fear that its use could introduce unintended bias, especially against protected characteristics such as race, sexuality and age. At the same time, private citizens worry about the privacy implications of so much data being collected by police.
The idea of using historical data to predict future crimes has been around for decades: from 1993 to 1998 in New York City, when this technique was employed, homicides dropped by 67% and burglaries declined by 50% (National Speech and Debate Association). But as technology has evolved in the years since, so has the sophistication of the tools that police can use to track, analyse and forecast crime.
In predictive policing, police forces collect information about crime types, locations, and dates, as well as information on arrests, convictions and other criminal records of those involved. They use that data to forecast what, where, and when future crimes will take place. On the surface, this type of law enforcement appears to be taken directly out of the Hollywood movie Minority Report. But by using historic crime data to make inferences about the future probability of crime, predictive policing is vulnerable to engineering the very future it “predicted.” It is more directly a window into the past than a glimpse of the future. What on the surface appears to be an effective, efficient, and fair use of reliable, objective data sources is, upon closer examination, more complex and potentially problematic.
When one digs a bit deeper into the sources of data, it becomes clear that not all data is created equal. Bias in policing is not new – just look at recent catalysts for the Black Lives Matter movement. If the data used to predict the next crime comes solely from biased policing sources, those biases will be amplified by the algorithm. This can affect how an officer sees their community and influence decisions that align with the prediction, making it a reality. The predictions can taint, for example, decisions about whether to make an arrest or what degree of force is warranted.
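This feedback loop can be seen in a toy simulation. All numbers here are invented for illustration; the model simply assumes that crimes are only recorded where officers are present to observe them, and that patrols are allocated in proportion to past records:

```python
import random

random.seed(0)

TRUE_RATE = {"A": 0.10, "B": 0.10}   # identical underlying crime rates
recorded = {"A": 60, "B": 40}        # historical records skewed toward area A
PATROLS = 100                        # patrol-hours allocated each day

for day in range(30):
    total = recorded["A"] + recorded["B"]
    for area in recorded:
        # allocate patrols in proportion to recorded crime ("the prediction")
        patrols = round(PATROLS * recorded[area] / total)
        # crimes are only recorded where officers are present to see them
        recorded[area] += sum(random.random() < TRUE_RATE[area]
                              for _ in range(patrols))

share_a = recorded["A"] / (recorded["A"] + recorded["B"])
print(f"Area A's share of recorded crime: {share_a:.0%}")
```

Although both areas have identical true crime rates, the algorithm never discovers this: area A keeps receiving the majority of patrols, and therefore keeps generating the majority of records, so the initial skew is locked in and presented back as objective evidence.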
Predictive policing has also been shown to infringe on individual rights through the use of surveillance tools and by classifying someone as a “pre-criminal” even though no crime has yet been committed (National Speech and Debate Association). Additionally, there is the problem of how the data is collected or transferred, and where it is stored. Was it collected with appropriate permissions and regard for individual rights and privacy? Where is it kept, and who has access to it?
Even if no police action is ever taken as a result of a so-called “pre-criminal” status, being identified as such could have severe negative consequences if this information was leaked due to a hack or other breach.
Predictive policing depends on big data, and big data presents security challenges and serious privacy risks for private citizens. Worldwide, several legal cases concerning misuse of and lack of transparency in predictive policing have already been filed.
A survey conducted by the Police Executive Research Forum found that 70% of [officer] respondents report “using predictive policing analytics, including crime mapping software and data analysis techniques, to stop serial offenders and to develop informed crime prevention strategies for their force.” As of February 2019, at least 14 forces in the UK were using some type of predictive policing platform (BBC). There are several examples of initiatives that focus on either “predictive mapping” (identifying crime hotspots) or “individual risk assessment” (predicting whether an individual is likely to commit an offence or be a victim of a crime).
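At its simplest, predictive mapping amounts to binning past incidents into grid cells and flagging the busiest cells for attention. A minimal sketch, with entirely made-up incident coordinates:

```python
from collections import Counter

# hypothetical incident records: (x, y) coordinates of past crimes
incidents = [(1, 1), (1, 2), (1, 1), (5, 5), (1, 1), (5, 5), (9, 0)]

CELL = 2  # grid cell size: each incident is binned into a 2x2 cell

counts = Counter((x // CELL, y // CELL) for x, y in incidents)

# flag the most active cells as "hotspots" for extra patrols
hotspots = [cell for cell, n in counts.most_common(2)]
print(hotspots)
```

Real platforms layer far more on top (temporal weighting, demographic and socioeconomic variables, self-exciting point-process models), but the core mechanic – and the core weakness – is the same: the map reflects where crimes were recorded, not necessarily where they occurred.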
It seems that the tide is changing, however.
In 2018, Kent Police ended a five-year predictive policing project after analysis showed that it had not actually reduced crime in their catchment area.
Since the BBC article was published, two forces have stopped using any type of predictive technology, and others are likely to follow suit. Many have concluded that, at present, the technology carries too great a risk of over-policing, and that its predictions are not generated with enough context or nuance to act on responsibly.
Innovation comes in all shapes and sizes, sometimes turning nothing into something, sometimes combining several things into one. Predictive policing may be rolled out in the UK again in some form, and that is not necessarily a bad thing if the proper precautions are taken. Perhaps this period of working through its limitations, and ensuring that forces truly understand what those limitations are, will help innovators find appropriate uses for this powerful technology.
This technology can help us keep our communities safe as long as we are aware of – and take steps to mitigate – unintended consequences.