Tuesday, April 26, 2016

IS PREDICTIVE POLICING THE LAW ENFORCEMENT TACTIC OF THE FUTURE?

A Johns Hopkins professor says it helps reduce crime and police profiling. An attorney at the Electronic Frontier Foundation says it will lead to more bias.

The Wall Street Journal
April 24, 2016

As big data transforms industries ranging from retailing to health care, it’s also becoming a more important tool for police departments, which are turning to data and analysis in an effort to boost their effectiveness.

Known as predictive policing, the practice involves analyzing data on the time, location and nature of past crimes, along with things such as geography and the weather, to gain insight into where and when future crime is most likely to occur and try to deter it before it happens.

Jennifer Bachner, director of the master of science in government analytics program at Johns Hopkins University, says giving police the ability to make data-driven decisions will help reduce biases that result in unfair discrimination, resulting in better relations between police and the communities they serve. Jennifer Lynch, senior staff attorney at the Electronic Frontier Foundation, says predictive policing is flawed and will only serve to focus more law-enforcement surveillance on communities that are already overpoliced.

YES: Police Can Be in the Right Place at the Right Time
By Jennifer Bachner

In an era of tight budgets, police departments across the country are being asked to do more with less. They must protect the public, but often have to do it with limited personnel, equipment and training resources.

To address this problem, law-enforcement agencies increasingly are turning to data and analytics to improve their ability to fight crime without substantial increases in operating costs. Known as predictive policing, these technologies and techniques empower police officers to take a more proactive approach to both preventing crime and solving open cases.

Predictive policing involves crunching data on past crimes, along with information such as the weather, the time of day and the presence of escape routes, to forecast where and when future crime is most likely to occur. In cities such as Santa Cruz, Calif., officers have access to maps outlining “hot spots,” or geographic areas most vulnerable to crime at a future point in time, and they are encouraged to use the information along with their knowledge of the community to decide where to allocate the most resources on a given shift.
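
To make the technique concrete, here is a minimal sketch in Python of one textbook approach to hot-spot mapping: kernel density estimation over past incident locations. The coordinates are invented, and this illustrates the general idea rather than any vendor’s actual product.

    # Minimal hot-spot sketch: estimate a crime-density surface from past
    # incident coordinates with kernel density estimation, then rank grid
    # cells by density. All coordinates are invented for illustration.
    import numpy as np
    from scipy.stats import gaussian_kde

    # Hypothetical (x, y) locations of past incidents, in km on a city grid.
    incidents = np.array([
        [1.2, 3.4], [1.3, 3.6], [1.1, 3.5],   # cluster near (1.2, 3.5)
        [4.0, 0.8], [4.2, 1.0], [3.9, 0.9],   # cluster near (4.0, 0.9)
        [2.5, 2.0],
    ]).T  # gaussian_kde expects shape (n_dimensions, n_points)

    density = gaussian_kde(incidents)

    # Score a coarse grid of cells and report the densest ones.
    xs, ys = np.meshgrid(np.linspace(0, 5, 25), np.linspace(0, 5, 25))
    cells = np.vstack([xs.ravel(), ys.ravel()])
    scores = density(cells)
    for i in np.argsort(scores)[-3:][::-1]:
        print(f"hot spot near ({cells[0, i]:.1f}, {cells[1, i]:.1f})")

A real deployment would fold in time of day, weather and other covariates, but the core idea is the same: turn past incidents into a ranked map of places to patrol.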

The theory isn’t complicated—being in the right place at the right time deters crime—and the approach has proved effective, particularly in places such as Santa Cruz, where the population is dispersed over a large area.

Some in law enforcement say predictive policing is particularly helpful when it comes to identifying and halting repeat criminals.

The Baltimore County Police Department says it used predictive methods to halt a string of convenience-store robberies. Police knew the locations of the robberies and the model of car the elusive offender was believed to drive, but had no obvious next target. By plotting the robbery locations on a map and applying an iterative algorithm, police identified a suspected point of origin. They then analyzed the streets the offender would most likely have used to reach the crime scenes and identified one street the offender had probably traveled frequently, and would probably use again. Officers staked out that street, rather than patrolling numerous convenience stores, and apprehended the suspect.
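
The department hasn’t said which algorithm it used, but a classic iterative way to estimate an offender’s likely base of operations from a set of crime sites is the geometric median, the point minimizing total travel distance to all sites. The Python sketch below computes it with Weiszfeld’s algorithm on invented coordinates; it is a textbook method, not necessarily the one Baltimore County employed.

    # Iterative point-of-origin estimate: the geometric median of the crime
    # sites, found with Weiszfeld's algorithm (iteratively re-weighted
    # averaging). Textbook method on invented data, for illustration only.
    import numpy as np

    def geometric_median(points, tol=1e-6, max_iter=200):
        guess = points.mean(axis=0)  # start from the centroid
        for _ in range(max_iter):
            dists = np.maximum(np.linalg.norm(points - guess, axis=1), tol)
            weights = 1.0 / dists    # nearer sites pull harder
            new_guess = (points * weights[:, None]).sum(axis=0) / weights.sum()
            if np.linalg.norm(new_guess - guess) < tol:
                return new_guess
            guess = new_guess
        return guess

    # Hypothetical robbery locations, in km coordinates.
    robberies = np.array([[2.0, 5.0], [3.5, 4.2], [2.8, 6.1], [4.0, 5.5]])
    origin = geometric_median(robberies)
    print(f"suspected point of origin near ({origin[0]:.2f}, {origin[1]:.2f})")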

Some critics say that because not all crime is reported, predictive models based on past crime data might miss future crimes that don’t fit historical patterns. But today’s predictive models aren’t based solely on past crime data—they also take into account some of the same things potential criminals do when planning crimes, such as geographic information.

To achieve positive results with predictive policing, some upfront costs are required: Law-enforcement agencies must make an initial investment not only in software, but also in training officers to understand the proper scope and limitations of data-driven policing.

The use of data, like the use of any tool, leaves openings for misuse, but police departments can take steps to protect civil liberties. There is a big difference, for example, between predicting where crime is most likely to occur and developing lists of potential future offenders without probable cause, a practice that certainly raises serious ethical and legal concerns.

Policy makers also must grapple with the proper scope of data collection, retention and use and be able to explain to the community how data is being used to enhance public safety. That is why departments that adopt predictive-policing programs must at the same time re-emphasize their commitment to community policing. Officers won’t achieve substantial reductions in crime by holing up in patrol cars, generating real-time hot-spot maps. Effective policing still requires that officers build trust with the communities they serve.

With proper implementation, monitoring and transparency, the trend toward evidence-based policing should ultimately enhance the relationship between communities and police officers. That’s because data-driven decision making is a step away from decisions based on biases that can result in unfair discrimination. Predictive models grounded in relevant data, including everything from past crime to the weather, limit the influence of prejudice or profiling by officers.

The stakes are high, but predictive policing offers an opportunity to make significant advances toward a safer and more just society.

NO: It Is Ineffective and Will Increase Police Bias
By Jennifer Lynch

Proponents of predictive policing claim it will lead to unbiased policing and reduced crime. But in reality, it will only further focus police surveillance on communities that already are overpoliced and could threaten our constitutional protections and fundamental human rights.

There is little data to back up claims by makers of predictive-policing systems that their products actually work. In fact, one of the few independent studies available—by Rand Corp.—found that predictive technology used in Shreveport, La., was ineffective at reducing crime.

This is likely due to the way predictive systems work. All predictive-policing systems analyze historical crime data to predict where crimes are likely to occur in the future. Some also rely on weather data, consumer financial data, property records and even information about family members or drawn from social-media posts to predict who is likely to be involved in future crimes. But these systems aren’t clairvoyant. Because the models are trained on data about known past crimes, they can predict only future incidents that resemble prior crimes in nature, time and location.

That means predictive-policing systems will miss at least 50% of crime: according to government estimates, only about half of the crime that occurs in the U.S. is ever reported, so that is all the data the systems can learn from. The result is that these systems will miss crimes that don’t fit past patterns, and law-enforcement agencies will devote more resources to finding crimes they would have found the old-fashioned way anyway, and fewer to crimes that require longer and deeper investigations.

Predictive-policing systems also are vulnerable to a feedback-loop problem: As data on arrests and criminal activity generated by predictive policing are fed back into the system, they will appear to justify the initial predictions and ensure that police keep looking for crime in the same places they always have.
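
A toy simulation makes the dynamic concrete. In the Python sketch below, two areas have identical underlying crime, but Area A starts with more patrols; because crime is recorded only when a patrol is present to observe it, the system’s own output keeps steering patrols back to the same area. All numbers are invented.

    # Toy feedback-loop simulation: Areas A and B have the same true crime
    # rate, but A starts with more patrols. Each period, patrols are
    # reallocated in proportion to recorded incidents, which depend on how
    # many patrols were watching. Invented numbers, for illustration only.
    import random

    random.seed(0)
    TRUE_CRIMES = 100                     # identical in both areas every period
    patrol_share = {"A": 0.7, "B": 0.3}   # Area A starts with more patrols

    for period in range(1, 6):
        # A crime is recorded only if a patrol happens to observe it, so
        # recorded counts track patrol presence, not true crime.
        recorded = {area: sum(random.random() < share for _ in range(TRUE_CRIMES))
                    for area, share in patrol_share.items()}
        total = sum(recorded.values())
        # Next period's allocation follows the data the system just produced.
        patrol_share = {area: count / total for area, count in recorded.items()}
        shares = {area: round(s, 2) for area, s in patrol_share.items()}
        print(f"period {period}: recorded={recorded}, next share={shares}")

Even though the two areas are identical by construction, the recorded counts echo the initial allocation, so the data appear to confirm that Area A has more crime.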

Putting aside concerns about effectiveness, using past crime as a model for predicting future crime has a deeper problem: It will perpetuate police bias. All of us commit crime, yet only some crimes are selected for enforcement. This is due partly to departmental priorities but also to well-documented racially biased policing.

Police bias informs crime data fed into predictive-policing systems, reinforcing existing inequalities in which neighborhoods and racial groups are most targeted by police. This makes decisions to focus on certain areas or groups appear impartial because the algorithm itself can’t be racist. It also allows intentional racism to be disguised as an unintentional byproduct of the system.

Predictive-policing systems that rely on information from social-media posts to predict whether a person is likely to commit a crime or escalate a dangerous situation also raise free-speech issues. People limit what they say when they know they are being watched, so models that rely on people’s speech have the very real potential to chill free expression.

Ultimately, we are uncomfortable with the notion that an algorithm can predict what we will do before we even decide to do it, and then tell the police about it. A system that takes incomplete, unreliable and biased data and spits out a conclusion that a particular person will commit a crime, or that crime will occur in a particular community, doesn’t give people the opportunity to choose a different path. Instead, by increasing police focus on certain people and areas, the prediction almost becomes a self-fulfilling prophecy: putting more police in a given area almost always results in more arrests there.

Rather than relying on predictive models to find crime, analytics could be used to address underlying societal factors that can lead to criminal behavior. A pilot program in Los Angeles, for example, is using predictive models to find the most at-risk children in the child-welfare system and provide them with services designed to help them stay out of the juvenile-justice system. With appropriate resources, these kinds of programs could do more to change the cycle of crime than using yet another technology to put people behind bars.
