The role of AI in China’s crackdown on Uighurs

https://www.ft.com/content/e47b33ce-1add-11ea-97df-cc63de1d73f4?fbclid=IwAR0GCXawMCjMj7JzzkYDskedngWgLdl5IFvodOAMehS3RCEO_jZ_ijSv-G4

In the past month, a trove of leaked internal documents has revealed the Chinese government's plans in the border region of Xinjiang, where about 1.8 million members of the country's Muslim minority have been detained. The leaks have further exposed the centrepiece of China's surveillance state in Xinjiang: a vast database holding information on the background and behaviour of millions of residents.

In the wake of the leaks, the IJOP, an "integrated joint operations platform", has been described as a form of "predictive" policing that uses "big data" and artificial intelligence. Yet while AI does aid the inputting of data, through tools such as facial-recognition cameras, there is no evidence so far that the IJOP uses it to make decisions about individuals. In associating China's repression in Xinjiang with sophisticated, AI-driven policing models, we may be assuming too much.

The IJOP's technology is at root driven by political objectives that are blunt and indiscriminate. As Edward Schwarck, a PhD student researching Chinese public security at the University of Oxford, says: "Calling it intelligence-led or predictive policing draws attention away from the fact that what is happening in Xinjiang is not about policing at all, but a form of social engineering."

China has high ambitions for the use of big data in national security and has set up a series of labs, starting in Xinjiang's Urumqi, to research the topic. But officers lament that their systems are a mess, and crimes such as political dissent are interpreted so loosely that they may never be predictable with any precision.

The stated intention of the government's security clampdown is to prevent terrorism and separatism. For the IJOP to incorporate an AI model that spots terrorists, humans would have to decide what counts as a terrorist, feed examples of terrorists into the model, and then ask the platform to find people with similar characteristics.

What counts as a terrorist in Xinjiang? This is the central question, and it is ultimately decided by humans, not machines. The government boasts that there have been no terrorist attacks in Xinjiang since 2016. So for such a model to work even in theory, humans must first decide that certain people who have not yet committed terrorist crimes are terrorists or future terrorists. Under Beijing's vague criminal categories, "terrorism" or "separatism" can cover not just plots to harm people but simply holding or attending private religious gatherings.

We don't yet know the full details of what leads the IJOP to flag someone as requiring police attention. But from various leaks we know that a vast range of traits can lead to someone being tagged as suspicious, including having family living abroad or having certain foreign apps, such as WhatsApp, installed on one's phone. More broadly, the behaviours the police are punishing are Uighur Muslim customs, from practising the religion to celebrating the language and culture. At the very least, China is trying to make Uighurs more like the Han Chinese majority.

The fact that there are 1.8 million people in mass detention camps in Xinjiang should itself suggest that the Chinese state is not interested in precision or in predictive policing. For an AI model to be meaningful, its objective has to include a cost for inaccuracy, penalising false positives. If you don't care about accuracy, you don't need AI to achieve your goals.
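To make the distinction concrete, here is a minimal, hypothetical Python sketch. The trait checklist paraphrases criteria reported in the leaks, but the field names and logic are illustrative assumptions, not the IJOP's actual code, which has not been published; the commented-out lines show what a genuinely predictive model would additionally require.

```python
from dataclasses import dataclass

@dataclass
class Resident:
    has_family_abroad: bool
    has_foreign_apps: bool          # e.g. WhatsApp found in a phone search
    attends_religious_gatherings: bool

def rule_based_flag(person: Resident) -> bool:
    """Checklist-style tagging of the kind the leaks describe: any single
    trait marks a person as 'suspicious'. No training data, no prediction,
    and no cost attached to a false positive."""
    return (person.has_family_abroad
            or person.has_foreign_apps
            or person.attends_religious_gatherings)

# A genuinely predictive model, by contrast, would need humans to supply
# labelled examples first -- that is, to decide in advance who counts as
# a "terrorist" -- before it could generalise to new cases:
#
#     model.fit(features_of_past_cases, human_assigned_labels)
#     model.predict(features_of_new_residents)

print(rule_based_flag(Resident(False, True, False)))  # True: one trait suffices
```

The contrast is the article's point: the first function needs no AI at all, and the second cannot be built without humans first deciding who the "terrorists" are and what a false positive should cost.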
The most devastating feature of the IJOP is not how the data is processed but how it is gathered. Input sources range from government-assigned groups of 10 households asked to inform on one another, to police extracting data from smartphones, to facial-recognition surveillance cameras.

The effect of surveillance on this scale is to make people feel that they are constantly being watched, and to fear they might be doing the wrong thing, even when "wrong" is not well defined. When I was last in Xinjiang, a Uighur man told me he was no longer able to pray in the mosques, since many had closed. He could only pray at home, and even then, he added, "they" would know. I didn't ask why he believed he was being watched at home, fearing the consequences for him of speaking to a journalist. But for the government to eradicate Uighur customs, it doesn't matter whether people are actually being watched, so long as they believe they are.

Yuan Yang is the FT's China tech correspondent in Beijing
