Artificial intelligence that scours crime data can predict the locations of crimes in the coming week with up to 90% accuracy, but there are concerns about how systems like this can perpetuate bias.
A week in advance, artificial intelligence can predict the location and rate of crime in a city with up to 90% accuracy. Similar systems have been shown to perpetuate racist bias in policing, and the researchers who developed this AI claim that it can also be used to expose those biases.
Ishanu Chattopadhyay and his colleagues at the University of Chicago developed an AI model that analyzed historical crime data from Chicago, Illinois, from 2014 to the end of 2016, then predicted crime levels for the weeks following this training period.
With up to 90% accuracy, the model predicted the likelihood of certain crimes occurring across the city, which was divided into squares about 300 meters across, a week in advance. It was also trained and tested on data from seven other major US cities, with comparable results.
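The paper's actual model is a specialised stochastic inference method, but the basic framing described above can be illustrated with a toy sketch: tile the city into cells roughly 300 meters across, count events per cell per week, and produce a naive next-week forecast per cell. The function names and the moving-average baseline here are illustrative assumptions, not the authors' method.

```python
import numpy as np

CELL_METERS = 300  # tile size mentioned in the study


def assign_cells(xs, ys, cell=CELL_METERS):
    """Map event coordinates (in meters) to integer grid-cell indices."""
    return np.floor(xs / cell).astype(int), np.floor(ys / cell).astype(int)


def weekly_counts(cell_ids, weeks, n_cells, n_weeks):
    """Build a (cells x weeks) matrix of event counts."""
    counts = np.zeros((n_cells, n_weeks), dtype=float)
    np.add.at(counts, (cell_ids, weeks), 1)  # accumulate one count per event
    return counts


def predict_next_week(counts, window=4):
    """Naive stand-in predictor: each cell's next-week count is the
    mean of its last `window` weeks. The real model is far richer."""
    return counts[:, -window:].mean(axis=1)
```

Even this crude baseline makes the evaluation setup concrete: train on historical weeks, then score predictions against the week that follows the training period.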
Previous attempts to use AI to predict crime have been controversial because they have the potential to perpetuate racial bias. In recent years, the Chicago Police Department has experimented with an algorithm that generates a list of people who are most likely to be involved in a shooting, either as a victim or a perpetrator.
The algorithm and the list were initially kept secret, but when the list was finally released, it revealed that 56% of Black men in the city aged 20 to 29 were on it.
Chattopadhyay acknowledges that the data used by his model will be biased as well, but claims that efforts have been made to reduce the effect of bias and that the AI does not identify suspects, only potential crime scenes. “This isn’t Minority Report,” he explains.
“Law enforcement resources are limited, so you want to make the most of them,” he says. “It would be great if you could predict where homicides would occur.”
According to Chattopadhyay, AI predictions could be used more safely to inform policy at a high level rather than directly allocating police resources. He has made public the data and algorithm used in the study so that other researchers can investigate the findings.
The researchers also used the data to look for areas where human bias is affecting policing. They examined the number of arrests resulting from crimes in Chicago neighborhoods at different socioeconomic levels. This revealed that crimes in wealthier areas led to more arrests than those in poorer areas, suggesting a bias in police response.
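The comparison described above amounts to computing an arrest rate per neighborhood group and contrasting the groups. The sketch below is a made-up illustration of that calculation; the groupings, numbers, and function name are invented for clarity and are not the study's data or code.

```python
# Toy records: (income_group, arrest_made) for reported crimes.
# All values are fabricated for illustration only.
records = [
    ("higher", True),
    ("higher", True),
    ("higher", False),
    ("lower", True),
    ("lower", False),
    ("lower", False),
]


def arrest_rate(records, group):
    """Fraction of reported crimes in `group` that led to an arrest."""
    crimes = [arrested for g, arrested in records if g == group]
    if not crimes:
        return 0.0
    return sum(crimes) / len(crimes)
```

A gap between `arrest_rate(records, "higher")` and `arrest_rate(records, "lower")` is the kind of disparity the researchers point to as evidence of bias in police response.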
Lawrence Sherman of the Cambridge Centre for Evidence-Based Policing in the United Kingdom is concerned about the study’s inclusion of both reactive and proactive policing data: crimes that are recorded because people report them and crimes that are recorded because police go out looking for them. He argues that the latter type of data is highly susceptible to bias. “It could be reflective of police discrimination in certain areas,” he says.
Reference: Nature Human Behaviour, DOI: 10.1038/s41562-022-01372-0