
Can artificial intelligence and predictive policing make us safer?

Line of Defence Magazine, Summer 2017/18

Associate Professor Ryan Ko: AI "invaluable in the fight against terrorism."


We speak to Associate Professor Ryan Ko, Director of the New Zealand Institute for Security and Crime Science at Waikato University, and Associate Professor Michael Townsley, Head of Griffith University’s School of Criminology and Criminal Justice, about predicting crime.


Predictive policing methodologies and Artificial Intelligence technologies are widely reported as the future of crime fighting and prevention, and as formidable weapons in the foiling of terrorist plots. To what extent do AI and predictive policing provide the answer, what are their limitations, and what are the risks?


Predictive policing – it’s not Minority Report

The methodologies of predictive policing are increasingly being studied and deployed by law enforcement agencies in New Zealand and across the Tasman. “Predictive policing has appeal,” Michael Townsley told Line of Defence, “because it uses information routinely collected and promises to highlight locations or people of interest to police.”

According to Professor Townsley, predictive policing in essence involves collating a list of people or places and ranking the list according to their future risk of committing or hosting crime. Resources, such as patrols or crime prevention measures, can then be allocated according to the forecasts.

“Studies have shown that the 10 percent most active offenders account for about 50 percent of all crime, and the top 10 percent of places host 60 percent of crime. If you can predict the places or people most likely to commit or host crime, you have a good chance at impacting it,” he said.
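The ranking approach Professor Townsley describes can be illustrated with a minimal sketch. All place names and incident counts below are hypothetical, invented purely to show how historical crime volume might be used as a crude proxy for future risk:

```python
from collections import Counter

# Hypothetical incident log: each entry is the place where a crime occurred.
# Names and counts are illustrative only, not real data.
incidents = (
    ["central_mall"] * 40 + ["station_rd"] * 25 + ["dock_st"] * 15 +
    ["park_ave"] * 8 + ["hill_top"] * 5 + ["river_walk"] * 3 +
    ["oak_ln"] * 2 + ["elm_ct"] * 1 + ["bay_view"] * 1
)

counts = Counter(incidents)
total = sum(counts.values())

# Rank places by historical crime volume, highest first.
ranking = counts.most_common()
top_place, top_count = ranking[0]
print(f"Top-ranked place: {top_place} ({top_count / total:.0%} of all crime)")
```

In this toy data, a single place accounts for 40 percent of all incidents, mirroring the concentration Townsley describes; patrols would then be weighted toward the top of the ranking.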

“Australasian services should definitely explore and consider predictive policing approaches, but not blindly of course. What works in one jurisdiction may not be replicable in another. Careful consideration about localising the forecasting approach as well as the tactical options needs to occur.”


Artificial intelligence

Artificial Intelligence is gaining momentum internationally in policing. It’s being used by social media companies to identify hate speech, extremism and terrorists; in motor vehicles to prevent ramming attacks; in cyber security to detect threats; and by law enforcement as a technological enabler for predictive policing.

According to Ryan Ko, AI is rapidly growing in terms of its adoption in policing. The goal, he told Line of Defence, is to increase productivity by making laborious policing tasks more efficient.

Working closely with NZ Police, Professor Ko’s New Zealand Institute for Security and Crime Science (NZISCS) is leading the way in New Zealand in the development and application of AI technologies in crime fighting.

One of his teams is working with the Waikato District Police on patrol optimisation, using forecasting and optimisation algorithms, “allowing us to plan optimised area coverage (and usually, consequently, higher safety) given limited resources and crime patterns.”

“NZ Police is also working with NZISCS on #CrimeOnline, an artificial intelligence project identifying cyber-enabled crime on social media platforms,” he said. #CrimeOnline has cut analysis time from hours or days to seconds, allowing quicker identification of risks to society.




Ethical pitfalls

Predictive approaches and AI are not without their ethical pitfalls. The NSW Police’s Suspect Targeting Management Plan (STMP), which utilises predictive policing methodologies, recently came under fire for targeting young and indigenous people, and using predictors to justify the stopping and searching of youth.

“The major pitfall of predictive policing is what happens after the forecast,” said Professor Townsley. “If the tactics selected reinforce the ranking, then the system becomes a self-fulfilling prophecy.”

“To illustrate, suppose I predict next month's biggest spenders for a major retailer. If the retailer then offers discounts to my predicted list, there's a good chance they will spend more than if they didn't receive the offer. My prediction looks good, but really it was the actions taken post-forecast that deserve the credit. In other words, I really only predicted those who act on discounts.”

It’s a feedback loop that’s difficult to avoid when targeting people, he says. Increased attention can result in higher likelihood of detection, which will further reinforce the ranking.
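This feedback loop can be demonstrated with a small simulation. The scenario below is entirely hypothetical: two areas have identical underlying crime rates, but a naive policy that reallocates patrols toward wherever more crime was detected last week lets random fluctuations compound, so attention can concentrate on one area even though neither is genuinely riskier:

```python
import random

random.seed(42)

# Two areas with IDENTICAL underlying crime rates (hypothetical numbers).
true_rate = {"area_a": 0.3, "area_b": 0.3}
patrols = {"area_a": 10, "area_b": 10}  # start with equal attention
TOTAL_PATROLS = 20

for week in range(20):
    # Detections depend on both the underlying rate AND how hard we look.
    detected = {
        area: sum(1 for _ in range(patrols[area]) if random.random() < true_rate[area])
        for area in patrols
    }
    # Naive policy: next week's patrols follow last week's detections.
    total_detected = sum(detected.values()) or 1
    patrols = {
        area: max(1, round(TOTAL_PATROLS * detected[area] / total_detected))
        for area in detected
    }

print(patrols)  # allocations tend to diverge despite equal true rates
```

Because detections feed back into attention, the ranking reinforces itself; this is why Townsley argues a counterfactual comparison is needed to judge whether the forecast itself was any good.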

As such, avoiding criticisms of discriminatory profiling requires predictive policing models that avoid targeting characteristics that are static and outside the control of individuals. “In technical terms, you need a counterfactual: what would have happened had we not acted?”

Professor Ko believes that the best analysis and insights will still require human input coupled with strong ethical principles.

“Like any tool, AI must never be used to target or marginalise any group,” he commented. “This is the mindset of criminology, which focuses on the likelihood of a person committing crime. Rather, there is a need for crime science, which focuses on crime events and how we can prevent them through an interdisciplinary approach combining geography, psychology, strategy and computer science.”

“AI needs ethical and technical guidelines designed around it. When the NZISCS was established in March 2017, our first priority was to engage and work together with iwi, the Deputy Police Commissioner (Maori) and communities.”


Is AI actually smarter?

Do AI technologies provide a more scientific vehicle for forecasting than leaving it to humans? Both Ko and Townsley concur that, for all its significant benefits, AI has its limitations.

According to Professor Ko, there are many scenarios that also require eye-witness, evidence-informed, or intuition- and experience-informed methods. “For strategic objectives, humans must always be a part of the cycle,” he said, “as the current AI technologies are still application specific (e.g. playing chess, machine learning).”

“AI, in essence, is pattern recognition at scale,” explained Professor Townsley. “That is, it can possess a magnitude of recall and analysis far beyond what the human mind possesses (at a point in time), but it may not understand the nuances behind the inputs it receives.”


Best hope against terrorism?

Predictive policing and AI are seen as the future of combatting the threat of terrorist attacks, given their potential ability to predict attacks before they happen. But the relative rarity of attacks, and the resulting lack of data, poses challenges.

According to Professor Townsley, the best opportunity for prediction is when networks are in the planning stage, not the attacks themselves. “Large-scale attacks are mercifully rare, but modelling rare events is problematic – a small number of cases can lead to spurious correlations. Larger numbers of events tend to wash out spurious associations.”

“However, consider if a group is planning an attack. There are a number of prerequisite conditions: equipment, personnel, site intelligence. These sorts of patterns are likely to emit a different signal from a group planning a surprise party, or an overseas holiday. The opportunity is that there would be plenty of control groups on which to train an AI algorithm to discriminate between overt and covert planning.”

Professor Ko agrees that the only way to apprehend terrorists before they execute their plans is to know what they are planning in advance.

“One of the chief obstacles in this battle is not only the acquisition of the necessary intelligence from the various types of surveillance that our government agencies employ but the ability to process all of this data and recognise patterns and relationships,” he said.

“Computer programs that have the ability to not only collect and sort millions of bits of random data, but to recognize how they relate to each other, are invaluable in the fight against terrorism.”


Waikato University’s interdisciplinary NZISCS and its flagship Master of Security and Crime Science are the first of their kind in the Asia-Pacific region. For more information, visit:

