
From Fallujah to the San Fernando Valley, Police Use Analytics to Target “High-Crime” Areas

Police are now crunching crime statistics to divine when and where future crimes are most likely to occur, a method some see as a further militarization of police work.

(Photo: Alex Thompson / Flickr)

In an article in the November 2009 issue of Police Chief Magazine, the Los Angeles Police Department’s Chief of Detectives Charlie Beck asked his fellow law enforcement leaders, “What can we learn from Walmart and Amazon about fighting crime in a recession?”

The answer Beck offered was “predictive policing,” a new high-tech method by which police crunch crime statistics and other data with algorithms to divine when and where future crimes are most likely to occur. Beck wrote that it was an example of police following in the footsteps of tech-savvy corporate America: “Specific tactics and techniques to execute the predictive-policing model can be found in business analytics. E-commerce and marketing have learned to use advanced analytics in support of business intelligence methods designed to anticipate, predict and effectively leverage emerging trends, patterns and consumer behavior.”

Since then, predictive policing has become a media darling. Hundreds of stories have been written on the use of computer models to predict and prevent crime. Police departments from Seattle to London have been profiled using computer prediction and mapping tools to apprehend suspects and saturate high-crime “hot spots” with officers.

However, the media coverage to date has missed the military origins of predictive policing. Far from being a transfer of technology and Big Data advances spawned by corporate America, the predictive policing software used by many cities actually originates in US military-funded research to track insurgents and predict civilian casualties in war zones such as Iraq and Afghanistan. Critics say the little-known origins of predictive policing reveal the biases of the technology and methods, and worry that it represents a further militarization of America’s police.

UCLA’s Military Research Labs

In May 2006, years before the Los Angeles Police Department began experimenting with predictive policing technology, UCLA professors Andrea Bertozzi and Jeffrey Brantingham obtained a US Army Research Office grant to apply statistical modeling to various military problems. On Professor Bertozzi’s UCLA website, the grant carries the somewhat benign-sounding title “Spatio-temporal event pattern recognition.” To the US Army, it is titled “spatio-temporal nonlinear filtering with applications to information assurance and counterterrorism.”

Other grants to study counterterrorism and insurgency followed. Over the next several years, Brantingham, his graduate students, and a postdoctoral researcher named George Mohler reported back to the Army on how their research findings might be applied to warfare. They modeled patterns of “civilian deaths in Iraq” and “terrorist and insurgent activities,” publishing their findings in several academic journals.

“One of the goals is to develop a probabilistic framework for detecting and tracking covert activities of hostile agents,” the UCLA researchers wrote about their projects. “This framework will include an algorithmic toolkit for detecting and tracking hostile activities, methodology for analyzing properties of those algorithms, and theoretical models that will address the general question of trackability.”

In a presentation last year to the Air Force Research Laboratory, another military agency funding the UCLA professors, Brantingham talked about the “hybrid threats” of “radicalization and adversarial psychology,” which lead to “adversarial activity patterns” and ultimately to “hostile events.” Slides accompanying Brantingham’s presentation showed Afghan men and other Arab or Muslim men with their faces wrapped in scarves, gathered around a cache of automatic rifles. Brantingham’s presentation also included images of Latino youths in Los Angeles, labeled “gang members.”

In a 2009 report to the US military, the UCLA researchers made direct comparisons between enemy combatants, called “insurgents” or “terrorists,” and populations in the United States they defined as “gang members.” UCLA’s Brantingham and Bertozzi explained that the algorithms they developed for the US military were going to be applied in Los Angeles, California:

“Working with LAPD, we are developing novel algorithms for detecting changes in status of high-crime neighborhoods using a combination of statistical and spatial models developed in this program. Development and implementation of quantitative change-point detection and particle-filtering techniques will provide the Army with a plethora of new data-intensive predictive algorithms for dealing with insurgents and terrorists abroad.”
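The report does not spell out the math, but “change-point detection” on crime counts is a standard statistical technique. As a rough illustration only (not the researchers’ actual code; the data, thresholds and function names below are all invented), a Poisson change-point test asks whether a neighborhood’s weekly report counts are better explained by one underlying rate or by two rates split at some point in time:

```python
# A minimal, hypothetical sketch of Poisson change-point detection on a
# neighborhood's weekly crime counts -- the generic technique the grant
# report names, not the UCLA team's actual code. All data are invented.
import math

def poisson_loglik(counts, rate):
    """Log-likelihood of observed counts under a Poisson(rate) model."""
    if rate <= 0:
        return float("-inf")
    return sum(k * math.log(rate) - rate - math.lgamma(k + 1) for k in counts)

def best_change_point(counts):
    """Return the split index maximizing the two-segment likelihood gain.

    A large gain over the single-rate model suggests the neighborhood's
    underlying crime rate changed at that point in time.
    """
    n = len(counts)
    base = poisson_loglik(counts, sum(counts) / n)  # no-change model
    best_idx, best_gain = None, 0.0
    for t in range(1, n):
        left, right = counts[:t], counts[t:]
        split = (poisson_loglik(left, sum(left) / len(left))
                 + poisson_loglik(right, sum(right) / len(right)))
        if split - base > best_gain:
            best_idx, best_gain = t, split - base
    return best_idx, best_gain

# Invented example: a jump in weekly burglary reports around week 8.
weekly_counts = [2, 3, 1, 2, 2, 3, 2, 2, 7, 8, 6, 9, 7, 8]
print(best_change_point(weekly_counts))  # -> (8, <likelihood gain>)
```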

LAPD Rolls Out Predictive Policing

In 2010, LAPD received a $3 million grant from the National Institute of Justice to develop “intelligence-led policing” practices, including crime-prediction methods. In conjunction with the Los Angeles Police Foundation, a 501(c)(3) nonprofit that channels private money to LAPD (53 percent of which goes to technology purchases), LAPD entered into a research agreement with UCLA professor Brantingham to test his team’s predictive algorithms on property crimes in LAPD’s Foothill Division in the San Fernando Valley.

By 2012, officers patrolling the 46-square-mile Foothill Division were receiving daily printouts of maps with 500-by-500-foot boxes identified by Brantingham’s algorithms as the 20 zones where crime was supposedly most likely to occur. The predictions were based on six years’ worth of geo-located crime reports.
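The model behind those maps is proprietary, but the workflow described above implies a simple skeleton: bin geo-located reports into 500-by-500-foot cells, score each cell from its history, and flag the top 20. The sketch below shows only that skeleton, under invented assumptions (the recency weighting and all names are hypothetical, and PredPol’s actual model is a far more elaborate self-exciting point process):

```python
# A minimal sketch of the grid-and-rank idea: bin geo-located crime
# reports into 500-by-500-foot cells and flag the 20 highest-scoring
# cells. Illustrative only; not PredPol's proprietary algorithm.
from collections import Counter

CELL_FT = 500   # side length of each prediction box, in feet
TOP_K = 20      # number of "hot" boxes printed on the daily map

def cell_of(x_ft, y_ft):
    """Map a report's local x/y coordinates (in feet) to its grid cell."""
    return (int(x_ft // CELL_FT), int(y_ft // CELL_FT))

def hot_cells(reports, half_life_days=180.0):
    """Score cells by recency-weighted report counts; return the top cells.

    `reports` is a list of (x_ft, y_ft, age_days) tuples -- e.g. six
    years of geo-located property-crime reports. Weighting recent
    reports more heavily is an assumption, not LAPD's documented method.
    """
    scores = Counter()
    for x, y, age in reports:
        scores[cell_of(x, y)] += 0.5 ** (age / half_life_days)
    return scores.most_common(TOP_K)

# Invented example: three reports, two falling in the same cell.
reports = [(120.0, 480.0, 10), (300.0, 250.0, 400), (90.0, 410.0, 30)]
print(hot_cells(reports))
```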

LAPD believes its predictive policing program has made a real impact, helping officers do their jobs and reducing crime rates in the areas where the algorithm is used.

“This program is not the panacea,” said Captain Jorge Rodriguez, one of the commanders of the Foothill Division. “It’s like a big wide net we cast out into the ocean; there’s going to be some seepage.” However, he said, the historical crime data used to generate the predictions provides more than enough justification to allocate officers to sit on that location – a luxury the LAPD can afford by dint of the department’s budget and manpower.

The one-year predictive policing pilot in Foothill Division ended in January 2013 with an evaluation of the program’s effectiveness. According to Rodriguez, the analysis showed that Foothill Division led the LAPD’s other patrol areas in crime reductions for every week of 2012. Since then, LAPD has restarted the predictive policing initiative in Foothill Division and expanded it to two more patrol sectors.

Militarized, Racialized Police Technologies

While LAPD and other police departments have praised the results of their initial experimentation with predictive policing, the method’s military origins and undetermined efficacy have come under fire from some activists and academics. Many worry that it is simply an extension of existing police practices that unjustly target people of color, albeit this time under the guise of objective technology.

Whitney Richards-Calathes, a doctoral candidate at the City University of New York who is studying predictive policing, said that predictive policing in Los Angeles has its conceptual origins in the controversial “broken windows” theory championed by former LAPD chief William Bratton. Under broken-windows policing, officers are encouraged to aggressively police small crimes and even mere signs of disorder. “Predictive policing works on this notion of broken windows. It is technology that predicts these minor offenses, like burglary or auto theft,” Richards-Calathes said. “There is no evidence that it is an effective technology for murders, and of course, it is not a technology marketed to hit much more serious issues such as white-collar crime.”

Richards-Calathes points out that a large part of determining policing strategy is determining the definition of a crime and which crimes deserve attention. “The most over-policed crimes are offenses that are believed to be committed by people of color,” she said.

Richards-Calathes also noted that UCLA professor Brantingham has monetized his research on predictive policing in the form of PredPol Inc., a for-profit company that is marketing a proprietary crime-prediction computer program to municipal police departments as a solution to reduced staffing during a time of fiscal constraint.

In 2012, Brantingham, UCLA postdoc George Mohler, who had by then joined the faculty of Santa Clara University in Silicon Valley, and a team of businessmen from Santa Cruz, California, incorporated PredPol. Over the next two years, they signed up cities including Seattle; Richmond, California; and Columbia, South Carolina, for their cloud-accessed crime-prediction software. The contracts average over $100,000 each.

According to Richards-Calathes, the testing of experimental technologies on low-income communities of color reflects a deepening of the prison-industrial complex. “City, state, and federal agencies as well as corporations make money, and the technologies are implemented in immigrant, poor communities of color because these are places that are seen as low in political clout and these are places branded as ‘dangerous’ and ‘criminal,’ labels that make it seem as if residents are deserving of such surveillance, monitoring and policing.”

Hamid Khan, an organizer with the Stop LAPD Spying Coalition, has closely studied LAPD’s surveillance and information-gathering techniques. He views predictive policing as “a feedback loop of injustice” that has dubious efficacy, since the algorithms used by LAPD rely on reported crime data in low-income neighborhoods.

“There’s a clear bias that is inherent because it can only predict the information that is being uploaded,” said Khan. “In other words, it’s garbage in, garbage out.” According to Khan, the reliance on historical crime data skews police presence toward low-income neighborhoods that are already saturated with cops.

“The way the model works is previous crime data is put into these computer algorithms that can predict future crime,” Khan said. “This is low-level survival crime, so the same neighborhoods, poor neighborhoods, predominantly nonwhite communities, people of color.”
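Khan’s “garbage in, garbage out” critique can be made concrete with a toy simulation. The numbers below are entirely invented and the setup is deliberately simplistic; it only illustrates the mechanism he describes, in which data-driven patrol allocation amplifies an initial recording bias:

```python
# A toy simulation of the feedback loop Khan describes, under invented
# assumptions: two neighborhoods with the SAME true crime rate, but one
# starts with more recorded reports. Patrols follow the recorded data,
# and patrolled crime is more likely to be recorded -- so the initial
# skew compounds rather than corrects.
import random

random.seed(0)
TRUE_RATE = 10            # actual offenses per period, in BOTH areas
DETECT_PATROLLED = 0.9    # chance an offense is recorded if patrolled
DETECT_UNPATROLLED = 0.3  # chance it is recorded otherwise

recorded = {"A": 40, "B": 10}  # historical reports: A starts over-policed

for period in range(20):
    # "Predictive" step: send the patrol wherever the data says crime is.
    patrolled = max(recorded, key=recorded.get)
    for area in recorded:
        p = DETECT_PATROLLED if area == patrolled else DETECT_UNPATROLLED
        recorded[area] += sum(random.random() < p for _ in range(TRUE_RATE))

print(recorded)  # A's lead grows even though the true rates were identical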

In addition to the racialized aspect of crime prediction, Khan is also disturbed by the increasing reliance of police on business analytics and algorithms. Setting aside the element of profit for private companies that make such software, Khan warns that predictive policing has the potential to invert core principles of “justice.”

“In essence, the principle that has been held for the longest time – of innocent until proven guilty – has now been turned on its head, because now we’re all guilty until proven innocent,” Khan said.
