Introduction
It is unsurprising that data sets are used to anticipate crime. In the modern world, data drives decisions continuously, and prediction using algorithms is not new. The insurance industry, for instance, has used predictors for decades in determining risk (Boodhun & Jayabalan, 2018). The banking industry has employed algorithms to determine loan eligibility (Shie, Chen, & Liu, 2012). Political parties have collected and assessed data to identify, target, and influence voters (van der Voort, Klievink, Arnaboldi, & Meijer, 2019). The marketing industry has continually developed, refined, and shaped messages to potential consumers (Du, Rong, Michalska, Wang, & Zhang, 2019). Amazon, Facebook, and Google have all used machine learning techniques to analyze data derived from their customers (Hewage, Halgamuge, Syed, & Ekici, 2018). Each of these industries and corporations developed data sets to identify and target their respective consumers. What has changed over time is the advancement and refinement of the technology used to analyze the data. Artificial intelligence (AI) spawned deep learning (DL), which uses artificial neural networks modeled on the human brain and applies a set of algorithms through which the ‘machine’ reaches a solution to a specific problem (Marr, 2016a). For instance, Facebook’s DeepFace is a DL application that recognized faces with a 97% success rate, compared with the human success rate of 96% (Marr, 2016b). Yet even at a 97% success rate, this highly accurate application will still be wrong 3% of the time.
From a law enforcement and community safety perspective, the foundation for crime prediction is the concept that people behave predictably (to some degree) and that future behavior may be both anticipated and predicted (Hayes, 2015). If Hayes’s assertion is true, can human behavior data be analyzed to determine whether behavior patterns may be anticipated? And if so, could more efficient interventions to deter crime and maintain societal order be crafted without crossing the line into civil or human rights violations?
PREDICTIVE ALGORITHMS
Cambridge Dictionary (n.d.) defined an algorithm as “a set of mathematical instructions or rules that, especially if given to a computer, will help to calculate an answer to a problem.” Predictive algorithms rely on artificial intelligence (AI) applied to machine learning (Marr, 2016a), and all three (algorithms, machine learning, and artificial intelligence) are based on mathematical principles, such as probability theory and inferential statistics. Rigano (2019, para. 4) indicated that, “Conceptually, AI is the ability of a machine to perceive and respond to its environment independently and perform tasks that would typically require human intelligence and decision-making processes, but without direct human intervention.” While people may tend to think of mathematics as dealing in absolute truths and as an objective science, O’Neil (2016, 2017), a mathematician and data scientist, insisted that algorithms were nothing more than opinions embedded in code. An algorithm is a computer-coded instruction, written by human programmers, that allows patterns to be discerned within massive amounts of historical data. By then treating the discovered patterns as fixed facts, predictions of future outcomes are generated for single locations and/or individual people (Ferguson, 2017a). In the case of crime prediction, ‘hot spots’ are flagged. Regarding recidivism risk, a single score is assigned to individuals, identifying them within the justice system as having a high risk to recidivate. Over the past half dozen years, these predictive algorithms have been used as supportive tools for decision-making within all components of the criminal justice system (courts, corrections, and law enforcement). Within policing, predictive algorithms have become a “multi-million dollar business” (Ferguson, 2017a, p. 1132).
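The pattern-to-prediction step described above can be made concrete with a deliberately minimal sketch. The code below is purely illustrative and is not drawn from any deployed system: it assumes a hypothetical historical data set of incidents mapped to grid cells, counts incidents per cell, and flags any cell meeting a simple count threshold as a ‘hot spot.’ The function name, the grid representation, and the threshold are all assumptions introduced for illustration; operational products use far more sophisticated statistical models, but the underlying logic, past patterns treated as predictors of future events, is the same.

```python
from collections import Counter

def hot_spot_scores(incidents, threshold=2):
    """Flag 'hot spot' grid cells from historical incident data.

    `incidents` is a list of (x, y) grid-cell coordinates from a
    hypothetical historical crime data set. Any cell whose incident
    count meets `threshold` is flagged as a predicted hot spot --
    an illustrative stand-in for the pattern-discovery step that
    real predictive-policing tools perform with richer models.
    """
    counts = Counter(incidents)  # incidents observed per grid cell
    return {cell: n for cell, n in counts.items() if n >= threshold}

# Hypothetical historical data: repeated incidents cluster in two cells.
history = [(1, 1), (1, 1), (2, 3), (1, 1), (2, 3), (4, 0)]
print(hot_spot_scores(history))  # flags cells (1, 1) and (2, 3)
```

Note that the sketch also exposes the critique attributed to O’Neil above: the choice of threshold, grid size, and input data are all human decisions embedded in the code, and the output simply assumes that past concentrations of recorded incidents predict future ones.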