Crime Hotspot Prediction Using Big Data in China

Chunfa Xu, Xiaoyang Hu, Anqi Yang, Yimin Zhang, Cailing Zhang, Yufei Xia, Yanan Cao
DOI: 10.4018/978-1-7998-0357-7.ch019

Abstract

This chapter demonstrates that using big data and machine learning to predict crime is feasible in China. The researchers introduce five new machine learning algorithms into the field of crime prediction and compare them with four methods widely used in previous research. Using a weekly dataset covering 213 street-level cells of Shanghai from April 2017 to March 2018, they find that the new methods work better at predicting whether a specific cell will be a crime hotspot in the next week. Five of the nine methods predict crime with more than 90 percent accuracy. These findings provide a scientific reference for urban safety protection, and the research adds significant evidence to the theoretical literature arguing that big data can predict crime.
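
As a purely illustrative sketch (not the authors' code), the weekly hotspot task described above can be framed as a per-cell, per-week binary classification problem. The fragment below assumes a hypothetical table weekly_cells.csv with columns cell_id, week, crime_count, and is_hotspot, builds lagged crime counts as features, and trains one scikit-learn classifier to predict next-week hotspot status; the file name, column names, feature choice, and cut-off date are all assumptions.

# Minimal sketch, not the chapter's actual pipeline: lagged weekly features
# per cell, a temporal train/test split, and one of the classifiers from the study.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("weekly_cells.csv")               # hypothetical: one row per cell per week
df = df.sort_values(["cell_id", "week"])

# Features: crime counts in the previous four weeks for each cell.
for lag in range(1, 5):
    df[f"count_lag{lag}"] = df.groupby("cell_id")["crime_count"].shift(lag)

# Target: whether the cell is a hotspot in the following week.
df["target"] = df.groupby("cell_id")["is_hotspot"].shift(-1)
df = df.dropna()

features = [f"count_lag{lag}" for lag in range(1, 5)]
train = df[df["week"] < "2018-01-01"]              # illustrative temporal cut-off
test = df[df["week"] >= "2018-01-01"]

clf = GradientBoostingClassifier().fit(train[features], train["target"].astype(int))
pred = clf.predict(test[features])
print("hotspot accuracy:", accuracy_score(test["target"].astype(int), pred))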

Introduction

Crime is now recognized as one of the most serious social problems in the world; it is closely associated with violence, unhappiness, and insecurity. Moreover, it affects both people's quality of life and society's level of economic development, including public security, child development, and adult socio-economic status. According to several studies, crime tends to slow economic growth at both the national level (Mehlum et al., 2005) and the local level, such as in cities and metropolitan areas (Cullen & Levitt, 2009).

Unfortunately, as the world develops rapidly, a growing number of countries face increasingly serious crime problems. Since the early 1990s, the crime rate worldwide has risen by an average of 5% a year. For example, according to a BBC report, crime in England and Wales jumped by 13% in 2017, with a total of more than 5 million offences compared with 4.6 million the previous year, the fastest rise in crime in Britain in a decade. Beyond that, the forms of crime are becoming more diverse, such as cyber-crime, owing to the development of technology. All of this makes crime very hard to combat, and effective means of crime prevention are indispensable.

With the rapid development of network technology and ever faster data transmission, people's daily life has gradually split into two levels: reality and the network (Mcafee et al., 2012). At the network level, we have entered a “cloud” living environment: all kinds of basic and behavioral data from real life are instantly uploaded and recorded to this “cloud”. The resulting huge database makes many kinds of data analysis possible. With the advent of smartphones and wearable computing devices, every change in people's behavior, location, and even physiological state becomes data that can be recorded and analyzed; an era of mass production, sharing, and application of data is beginning. At the same time, despite continuous economic development, imbalances in the education and cultural level of the population induce crime. At the present stage, the total number of crimes in China is increasing year by year, and the crime rate keeps rising (Kumar et al., 2018). Therefore, using big data to prevent crime is a necessary means for public security organizations to investigate crimes in the future.

As early as 2011, crime prediction systems had been put into operation in a large number of major cities in the United States and the United Kingdom, achieving remarkable results (Xuemei, 2015). Time magazine even listed a big-data-based crime prediction system among its top 50 inventions of 2012. According to a research report by the RAND Corporation in 2013, predictive analysis of crime intelligence in the United States falls into four categories: methods for predicting crimes, methods for predicting offenders, methods for predicting offenders' identities, and methods for predicting crime victims (Andrey Bogomolov et al., 2014). Specific approaches are mainly based on low-complexity, small-scale historical crime data and use Crime Mapping, Hot-spot Policing, CompStat, and other analytical tools to produce forecasts (Dumbill, 2013). Probability predictions and alarm prompts are obtained by computer programs that analyze historical crime data, alarm data, economic conditions, and other limited external data, combined with crime maps to determine crime hotspots (Inbaek et al., 2017).
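
The chapter does not spell out its hotspot criterion at this point; one common convention is to label a cell a hotspot in a given week when its crime count falls in the top decile of all cells for that week. A minimal sketch under that assumption follows (the threshold, file name, and column names are illustrative, not taken from the chapter).

# Assumed hotspot rule: a cell is a hotspot in a week if its crime count is in
# the top 10% of all cells for that week; the chapter's own rule may differ.
import pandas as pd

df = pd.read_csv("weekly_cells.csv")   # hypothetical: columns cell_id, week, crime_count

threshold = df.groupby("week")["crime_count"].transform(lambda s: s.quantile(0.9))
df["is_hotspot"] = (df["crime_count"] >= threshold).astype(int)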

Crime prediction is of great practical significance to the whole of society. Scientific crime prediction methods and technologies can help public security organs make effective use of known data on criminal activities and their trends to predict possible future criminal behavior, and to plan the deployment of forces on the basis of the predicted results so as to maximize the effectiveness of limited resources (Daichao et al., 2014).

Key Terms in this Chapter

SVC: Support Vector Classification (SVC), a classifier based on support vector machines. It finds the maximum-margin boundary separating the classes in feature space, and kernel functions allow it to fit non-linear decision boundaries.

Logistic Regression: A regression method for modeling the probability of a binary outcome. It is widely used, for example in epidemiology, to explore the risk factors of a certain disease and to predict the probability of that disease occurring from those risk factors.
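
For reference, the standard logistic regression model (not specific to this chapter) expresses the probability of the outcome as a logistic function of a linear combination of the risk factors x_1, ..., x_k:

% Standard logistic model for a binary outcome y given risk factors x_1, ..., x_k
P(y = 1 \mid x_1, \dots, x_k) = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k)}}

where the coefficients \beta_0, ..., \beta_k are estimated from the data, typically by maximum likelihood.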

Ada Boost: An iterative algorithm whose core idea is to train a sequence of different classifiers (weak classifiers) on the same training set, re-weighting the training samples at each iteration, and then combine these weak classifiers into a stronger final classifier (strong classifier).

Bagging: An ensemble method that constructs a series of predictive functions from resampled versions of the training data and combines them, typically by voting or averaging, into a single predictor. Bagging requires an “unstable” classification method (one in which small changes in the training data can produce significant changes in the classification results).

Decision Tree: A tree-structured predictive model that splits the data step by step on the values of individual features; each internal node tests a feature, each branch corresponds to an outcome of the test, and each leaf assigns a class label, so a prediction is made by following a path from the root to a leaf.

K Neighbors: A classification method based on the idea that if most of the k most similar samples (nearest neighbors) to a given sample in the feature space belong to a certain category, then that sample also belongs to this category.

Extra Trees: An algorithm very similar to random forest; the main difference is that each tree is grown on the whole training sample rather than a bootstrap sample, and candidate split thresholds are drawn at random instead of being optimized. Because the splitting is random, the results obtained are, to some extent, sometimes better than those of random forest.

Gradient Boosting: An iterative ensemble algorithm in which each new weak learner is fitted to the negative gradient of the loss function, so that every iteration moves the combined model in the direction of steepest descent of the loss.
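
The terms above map directly onto standard scikit-learn classifiers. As an illustrative sketch of how such methods could be compared on the same task (not the chapter's experimental code), the fragment below uses synthetic placeholder data in place of the real weekly cell features; random forest, mentioned above as a point of comparison, could be added in the same way.

# Illustrative comparison loop over the classifiers defined above; synthetic data
# stands in for the real weekly cell features used in the chapter.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              ExtraTreesClassifier, GradientBoostingClassifier)

X, y = make_classification(n_samples=500, n_features=4, random_state=0)  # placeholder data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "SVC": SVC(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Ada Boost": AdaBoostClassifier(),
    "Bagging": BaggingClassifier(),
    "Decision Tree": DecisionTreeClassifier(),
    "K Neighbors": KNeighborsClassifier(),
    "Extra Trees": ExtraTreesClassifier(),
    "Gradient Boosting": GradientBoostingClassifier(),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))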
