Minority report: machine learning and crime prevention in China

In recent years, governments around the world have increasingly relied on machine learning technologies to strengthen domestic crime prevention efforts [1]. While many governments, including the U.S., employ machine learning for security purposes [2], this article focuses on China, the fastest-growing market for AI security technologies and home to roughly 200 million surveillance cameras [3].

China ranks 128th in the world in police officers per capita [4]. To police the world’s largest population with a relatively small security force, China has turned to machine learning for help. Facial recognition technologies and big data analytics allow China to build an effective crime prevention system that functions in three important ways.

First, facial recognition tools help police identify and capture suspects who pose security risks to the public [3]. Recently, police in Zhengzhou, aided by AI-powered facial recognition glasses, detained heroin smugglers at the local train station [3]. Similar technology helped police in eastern China capture 25 fugitives at a local beer festival [5]. Second, big data tools allow police to analyze motion and behavior data to detect criminal activity. Traffic authorities in Jinan, for example, used gait analysis to identify jaywalkers and track down violators [6]. The government of Chongqing analyzed residents’ activities to identify suspicious individuals linked to a local crime, weighing factors such as visits to knife stores, interactions with victims, and facial expressions [7]. Third, machine learning allows the government to predict criminal intent and prevent crimes before they occur. Working with AI company CloudWalk, China is developing a ‘police cloud’: a vast database of information on every citizen, including criminal and medical records, travel bookings, social media comments, and store visits [7]. The result is a big-data rating system that flags highly suspicious groups based on background and behavior signals [8]. Though still in development, the tool has the potential to help police identify high-risk individuals and streamline crime monitoring and prevention efforts.
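To make the rating-system idea concrete, below is a minimal sketch of how background and behavior signals could be combined into a single risk score using a logistic regression. The feature names, training data, and labels are purely hypothetical assumptions for illustration; they are not details of CloudWalk’s system or the police cloud.

```python
# Minimal sketch of a behavior-based risk score using logistic regression.
# All feature names and data are hypothetical; this is NOT the police cloud's
# actual model, only an illustration of how background and behavior signals
# could be combined into one rating.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per person:
# [knife_store_visits, contacts_with_victim, prior_offenses, nighttime_travel_bookings]
X_train = np.array([
    [0, 0, 0, 1],
    [2, 1, 1, 3],
    [0, 2, 0, 0],
    [3, 0, 2, 4],
    [1, 0, 0, 1],
    [0, 1, 3, 2],
])
# Hypothetical labels: 1 = later linked to a crime, 0 = not.
y_train = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

# Score a new individual; the output is a probability-style rating in [0, 1].
new_person = np.array([[1, 1, 0, 2]])
risk_score = model.predict_proba(new_person)[0, 1]
print(f"risk rating: {risk_score:.2f}")
```

Even this toy version makes the core concern visible: the rating is only as trustworthy as the labels and features it is trained on.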

Notwithstanding these encouraging results, machine learning as a crime prevention tool has several limitations. First, hardware and software shortcomings pose significant challenges. Surveillance cameras, for example, cannot scan more than 1,000 faces at a time [3]. Furthermore, most facial recognition software struggles to achieve a high level of accuracy; a recent FBI study indicates that the average facial recognition tool yields a large share of false positives [7]. These shortcomings seriously limit the reliability and scalability of facial recognition programs. Second, data collection is time-consuming and technically challenging. The vast majority of data files in China are not digitized, and reconciling information from disparate systems requires extensive effort [3]. Lastly, the regression models behind China’s crime prediction tools are crippled by systematic biases and are currently unfit for large-scale deployment. Similar to BrightMind’s struggle with gender and regional biases in its employment prediction model, crime prediction tools in China are affected by ethnic and socioeconomic biases. A recent study by Human Rights Watch found that one such tool assigns disproportionately high risk ratings to Uyghurs and other minority groups [9]. Left unaddressed, these biases could lead to wrongful detentions and arrests.
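One way to surface the kind of disparity the Human Rights Watch study describes is to audit a model’s false positive rate separately for each demographic group. The sketch below assumes we already have the model’s flags, verified outcomes, and a group attribute for every individual; the arrays here are invented placeholders, not real data.

```python
# Sketch of a simple fairness audit: compare false positive rates across groups.
# The arrays below are made-up placeholders; a real audit would use the
# deployed model's predictions and independently verified outcomes.
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Share of genuinely negative cases that were wrongly flagged."""
    negatives = (y_true == 0)
    if negatives.sum() == 0:
        return float("nan")
    return float(((y_pred == 1) & negatives).sum() / negatives.sum())

y_true = np.array([0, 0, 1, 0, 0, 1, 0, 0])   # verified outcomes
y_pred = np.array([1, 0, 1, 0, 1, 1, 1, 0])   # model flags
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

for g in np.unique(group):
    mask = group == g
    fpr = false_positive_rate(y_true[mask], y_pred[mask])
    print(f"group {g}: false positive rate = {fpr:.2f}")
# A large gap between groups is the kind of disparate impact described above.
```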

Despite these constraints, China is increasing its commitment to machine learning. The government plans to invest $150 billion in machine learning programs by 2030, creating an AI-powered security system that is ‘omnipresent, fully networked, and fully controllable’ [7]. In addition, well-funded startups including Watrix and LLVision are working to improve facial and motion recognition technologies [6].

To fully unlock the potential of machine learning as a crime prevention tool, I believe the Chinese government needs to pursue several initiatives. First, the government should invest in data standardization. The effectiveness of a machine learning program is limited by the quality of its input data, so the government needs to digitize records, reconcile mismatches, and connect disparate legacy systems to build a complete and accurate data source. Second, the teams building crime prediction algorithms must address biases in their models. They must consider gender, ethnic, regional, and socioeconomic factors to surface and remove discrimination against particular groups [8]. Once an algorithm is deployed, the teams should continuously update the model to proactively combat bias and improve prediction accuracy. Lastly and most importantly, the government must establish appropriate legal frameworks and procedures to minimize human rights violations. Crime prediction tools should supplement, not replace, the judgment of jurists. China needs to update its due process laws and establish greater checks and balances to eliminate unfair treatment and protect the rights of suspects.
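As one concrete illustration of what the record reconciliation step could look like, the sketch below merges entries from two hypothetical legacy systems by normalizing a shared ID field before joining them. The field names, data, and matching rule are assumptions made up for this example, not a description of any actual government system.

```python
# Sketch of reconciling records from two hypothetical legacy systems by
# normalizing a shared ID field before merging. Field names and data are
# invented for illustration only.
def normalize_id(raw: str) -> str:
    """Strip whitespace and dashes and upper-case an ID number."""
    return raw.replace("-", "").replace(" ", "").upper()

system_a = [{"id": "110-223 344x", "name": "Zhang Wei"},
            {"id": "550 661-772B", "name": "Li Na"}]
system_b = [{"id": "110223344X", "last_seen": "2018-10-02"},
            {"id": "999888777C", "last_seen": "2018-09-17"}]

# Index system B by normalized ID, then merge matching records from system A.
b_by_id = {normalize_id(rec["id"]): rec for rec in system_b}
merged = []
for rec in system_a:
    combined = dict(rec)
    combined.update(b_by_id.get(normalize_id(rec["id"]), {}))
    merged.append(combined)

print(merged)
# Records that fail to match stay incomplete, which is exactly the data
# quality gap that standardization is meant to close.
```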

As China and the rest of the world continue to make great strides in crime prediction technologies, the question remains whether the technology can balance the public’s need for security with individuals’ need for privacy. Are there areas (e.g., specific types of crime) in which machine learning can have a bigger impact? And how can we prevent willful abuse of this powerful tool?

(795 words)

[1] Schneier, B. (2017). Ubiquitous Surveillance and Security. IEEE Society on Social Implications of Technology (SSIT). Accessed November 12, 2018, from http://technologyandsociety.org/ubiquitous-surveillance-and-security/

[2] Del Greco, K. (2017). Law Enforcement’s Use of Facial Recognition Technology. Federal Bureau of Investigation (FBI). Accessed November 11, 2018, from https://www.fbi.gov/news/testimony/law-enforcements-use-of-facial-recognition-technology

[3] Mozur, P. (2018). Inside China’s Dystopian Dreams: A.I., Shame and Lots of Cameras. The New York Times. Accessed November 12, 2018, from https://www.nytimes.com/2018/07/08/business/china-surveillance-technology.html

[4] United Nations Secretary-General (2010). State of Crime and Criminal Justice Worldwide. Twelfth United Nations Congress on Crime Prevention and Criminal Justice. Accessed November 11, 2018, from https://web.archive.org/web/20140211174006/http://www.unodc.org/documents/commissions/CCPCJ_session19/ACONF213_3eV1050608.pdf

[5] Jiang, S. (2018). You Can Run, But Can’t Hide From AI in China. CNN. Accessed November 11, 2018, from https://www.cnn.com/2018/05/23/asia/china-artificial-intelligence-criminals-intl/index.html

[6] The Associated Press. (2018). Chinese ‘Gait Recognition’ Tech IDs People by How They Walk. The New York Times. Accessed November 11, 2018, from https://www.nytimes.com/aponline/2018/11/05/technology/ap-as-tec-china-gait-recognition.html

[7] Denyer, S. (2018). Beijing Bets On Facial Recognition In a Big Drive For Total Surveillance. The Washington Post. Accessed November 11, 2018, from https://www.washingtonpost.com/news/world/wp/2018/01/07/feature/in-china-facial-recognition-is-sharp-end-of-a-drive-for-total-surveillance/?utm_term=.e641c7d19b3a

[8] Yang, Y. (2017). China Seeks Glimpse of Citizens’ Future With Crime-predicting AI. Financial Times. Accessed November 11, 2018, from https://www.ft.com/content/5ec7093c-6e06-11e7-b9c7-15af748b60d0

[9] HRW. (2018). China: Big Data Fuels Crackdown in Minority Region. Human Rights Watch. Accessed November 11, 2018, from https://www.hrw.org/news/2018/02/26/china-big-data-fuels-crackdown-minority-region


Student comments on Minority report: machine learning and crime prevention in China

  1. This technology obviously raises a number of ethical questions, including that of privacy rights. A future where we’re being watched at all times is terrifying. How do you consider the tradeoff between crime prevention and personal privacy?

  2. While definitely not on the same scale nor on the same level of scrutiny, the U.S. has seen similar applications in recent years as well. A startup co-founded by a friend of mine, Mark43, develops a cloud-based system that helps police officers, first responders, and others manage their vast amounts of data and use machine learning to gather operational insights. (https://www.fastcompany.com/40426359/the-big-business-of-police-tech) Your first question will largely depend on the jurisprudence of each individual country. I can see a future where countries with strong privacy protection laws such as the U.S. do not go as far with this technology as countries with centralized governmental power such as China. As for preventing abuse, this is where I am quite wary of implementing such technologies. Those who end up in charge of this system need to be held to the highest standards of integrity and secrecy, which is far easier said than done.

    P.S. The movie “Eagle Eye” is an action film premised on the FBI’s secret machine learning AI going rogue and deciding that the assassination of the entire government is the only way to protect the U.S. constitution. Fascinating plot, decent movie.

  3. It’s fascinating, and frankly a bit horrifying, to see how this model can be so biased against minority groups. In my post, I highlighted a similar program in Chicago that uses machine learning to predict where violent crimes may occur. The data used in that case can easily be biased as it is based primarily on reports from police officers who may be biased in how they write reports or how they police in general. I would have hoped that collecting data that is not inherently biased (e.g., from a surveillance camera) would remove some bias from the predictive algorithm, but unfortunately that does not appear to be the case. I’m wondering if that is because the data that initially created the algorithm had a disproportionate number of minorities who had committed crimes, or if there is another confounding factor here. I am curious if the Chinese police force has a way to manually adjust for these biases, or if they are simply following the algorithm. Given the fact that most algorithms continue to learn over time, a small initial bias against minorities will compound over time.

  4. Almost wrote my paper on this – would love to talk to you about it Tom Thompson, whoever you are!
