Predictive Policing on Trial

The latest battlefield between law enforcement agencies and civil liberties groups centers on a practice known as predictive policing: a computer-driven method of flagging the places where crimes are likely to occur and the people likely to commit them.

Police departments around the country have been eager consumers of software programs with names like CrimeScan and PredPol, which promise much greater accuracy in predicting crime. The departments say the programs deliver on that promise. Civil liberties advocates say they may also be delivering racially and ethnically biased forecasts, a bias built into the data fed into the computers.

The arguments for and against are currently being presented in court, as some of the biggest police departments — including New York, Chicago and Los Angeles — face lawsuits aimed at prying loose the closely held secrets of the predictive policing programs.

The plaintiffs suspect that members of minority groups are being arrested on algorithm-based hunches that they are on their way to committing a crime, when in most cases they are just on their way to buy groceries at the corner bodega, albeit in a high-crime neighborhood.

As Jay Stanley, a senior policy analyst for the American Civil Liberties Union, said: “Everybody is trying to find out how it works, if it’s fair. This is all pretty new. This is all experimental. And there are reasons to think this is discriminatory in many ways.”

Critics are inclined to construe predictive policing as a euphemism for racial profiling, a new and diabolical tool in the kit of the Orwellian police state. But first, some perspective:

The first point to be made is that predictive policing is nothing new. It has always existed. Police departments routinely assign personnel according to the rate of crime in a given area over a period of time. What is new are the algorithms, which are designed to make those assignment decisions more scientific, more precise. A refinement, not a revolution.

The question of fairness is likely overdone. In a certain sense, discrimination is unavoidable. Police must discriminate against neighborhoods and individuals based on their record of crime. It’s no use pretending that all neighborhoods are law-abiding until proven crime-ridden. We know very well which are the high-crime areas; even the haughtiest civil libertarians would never willingly set foot in certain neighborhoods. In most cases, they make sure to live a safe distance from the places that turn up on the predictive policing lists.

The greater cause for concern has to do with predicting the behavior of individuals. Arguably, that is a wide-open portal for harassment and repression.

Not all of the predictive technologies go in for this individualized data. But Chicago’s “Strategic Subject List” of the people most likely to be involved in future shootings — either as shooters or as victims — does.

In a related case, a 2016 investigation by ProPublica concluded that an AI (artificial intelligence) system used by judges to predict the likelihood that a convicted criminal would break the law again was flawed by bias against minorities. Northpointe, the company that created the algorithm, denied the charge. But the finding lends support to the fear that predictive policing may be similarly flawed.

The potential for misuse has been illustrated elsewhere, too. Chinese authorities in Xinjiang have reportedly been gathering information on the Muslim Uighur population, the better to crack down on them.

“For the first time, we are able to demonstrate that the Chinese government’s use of big data and predictive policing not only blatantly violates privacy rights, but also enables officials to arbitrarily detain people,” Maya Wang, senior China researcher at Human Rights Watch (HRW), was quoted as saying in the Japan Times.

For their part, police departments are resisting demands to open up their files. In fact, they say they can’t, because of privacy laws and safety concerns, and because some of the data is proprietary.

It would nevertheless be worthwhile for the police, with the assistance of the courts, to find a way to make at least partial disclosure. Given a climate of public opinion in which they are assumed to be racist until proven otherwise, it behooves them to demonstrate that the methods being used are non-discriminatory. If they can do so, it will help restore some of the public confidence among minorities that has been lost in recent years.

Although no comprehensive studies have yet been conducted, the claims made for predictive policing are impressive enough that it’s understandable the police don’t want to give it up.

In November 2011, Time Magazine hailed it as one of the 50 best inventions of the year. PredPol, which focuses on places, not people, claims that its program has double the predictive power of ordinary human analysis.

The technology, which has been enthusiastically adopted by numerous police departments in the U.S., promises a more effective response to crime, a benefit to everyone, especially the law-abiding citizens of minority areas where crime rates are highest.

Some options have been suggested to allay the uneasiness about predictive policing. For example, the Center for Democracy & Technology has created a “digital decisions” tool that helps filter out bias by posing questions aimed at uncovering unfair assumptions and skewed data.

Ultimately, the computers themselves could be recruited to help resolve the problem. The recent debut of IBM’s Project Debater suggests the possibility that computers could analyze the algorithms used in predictive policing and then take sides in the debate.

If the Debater argues against it, the civil liberties advocates would have expert testimony for their case.

If it comes out for it, then we’d have to ask if it’s possible for a computer to have a conflict of interest.
