Artificial intelligence has been weaponized in China. That should be a wake-up call for the world

China is using AI to persecute its Uighur minority

Kyle Matthews & Alexandrine Royer · for CBC News · Posted: May 21, 2019 4:00 AM ET | Last Updated: May 21
https://www.cbc.ca/news/opinion/ai-china-1.5140612

The Chinese government is using AI-powered facial recognition systems to monitor and target Uighurs, a persecuted Muslim minority in China. (Ng Han Guan/Associated Press)


At this year’s World Economic Forum in Davos, the philanthropist George Soros caught everyone’s attention when he warned that the Chinese government’s use of artificial intelligence (AI) presents an “unprecedented danger” to its citizens and to all open societies.

His reading of the situation was prophetic. Last month, The New York Times confirmed Soros’ fears when the newspaper revealed that the Chinese government is using AI-powered facial recognition systems to monitor and target Uighurs, a persecuted Muslim minority in China. Human Rights Watch, in its recently released report titled “China’s Algorithms of Repression,” provides additional evidence of Beijing’s use of new technologies to curtail the rights and liberties of the Uighurs.

In the region of Xinjiang, where the majority of China’s Turkic minorities reside, surveillance cameras equipped with facial recognition are omnipresent at street corners, mosques and schools. Commuters travelling between towns must pass through security checkpoints where police, with the help of a mobile app, can access information ranging from their religious practices and political affiliation to their use of social media platforms and even their blood type. In this ecosystem of intense social monitoring, even legal, routine behaviour, such as exiting through a back door, can be treated as suspect and serve as grounds for dubious arrests.

China has already faced international condemnation for its large-scale arbitrary detention of the Uighurs. The Global Centre for the Responsibility to Protect and the Asia Pacific Centre for the Responsibility to Protect, in their report titled “The Persecution of the Uighurs and Potential Crimes Against Humanity in China,” have signalled that approximately one million Uighurs and other Turkic Muslim minorities have been placed against their will in “re-education” facilities.

The report cautions: “If urgent measures are not implemented to end the current state of systematic persecution, there is a clear and imminent danger of further crimes against humanity occurring.”

Social credit system

China’s willingness to use AI to control its wider population and stamp out disorder is already well reflected in its nascent social credit system. Developed in concert by private entities and the state, AI-powered algorithms collect data on an individual’s financial and social behaviours to calculate their social score and determine if they pose a threat to the Communist Party of China.

Citizens with low creditworthiness are publicly shamed as their names and faces appear on billboard-sized displays. However, the use of AI-based facial recognition systems to target minorities pushes this systematic repression one step further. This is the world’s first known case of a government using AI to carry out what many human rights experts consider mass atrocity crimes.

China’s use of AI to persecute the Uighurs demonstrates the need to establish a global human rights framework for this emerging technology. (Seth Wenig/Associated Press)

Though The New York Times reported that only Chinese companies developed the facial recognition software, Western tech giants are also catering to Beijing’s authoritarian needs.

In April, Microsoft was accused of being complicit in the design and research of AI facial recognition systems used by the Chinese government for state surveillance. Microsoft Research Asia and China’s military-run National University of Defense Technology co-authored three papers on AI and facial analysis last year.

The company defended its controversial partnership by stating that its employees’ research “is guided by our principles, fully complies with US and local laws.” Given the harmful potential of these technologies, Western companies should be more wary of such collaborations.

The truth is that China’s use of AI to persecute the Uighurs is a wake-up call for the international community, one that demonstrates the need to establish a global human rights framework for this emerging technology.

Privacy rights

Beyond its use by repressive regimes, AI can directly interfere with human rights in democratic and open societies. The near-limitless collection of personal data by AI systems for micro-targeted advertising limits the right to privacy. AI-enabled online content monitoring impedes freedom of expression and opinion, as access to and the sharing of information by users are controlled in opaque and inscrutable ways.

Vast AI-powered disinformation campaigns — from troll bots to deepfakes (altered video clips) — threaten societies’ access to accurate information, disrupt elections and erode social cohesion.

An equally frightening scenario is the use of AI in conflict situations. Human Rights Watch has warned that AI could be used in the future to target certain populations in war zones through the deployment of lethal autonomous weapons systems, commonly known as killer robots.

Many important voices are beginning to wake up to AI’s threat to human rights, particularly in the absence of regulation and oversight. In his report to the United Nations General Assembly on the Promotion and Protection of the Right to Freedom of Opinion and Expression, UN Special Rapporteur David Kaye stated that “a great global challenge confronts all those who promote human rights and the rule of law: how can States, companies and civil society ensure that artificial intelligence technologies reinforce and respect, rather than undermine and imperil, human rights?”

With China aggressively lobbying for Huawei to build the next-generation cellular network in Western countries, including Canada, policymakers should pause and reflect on a legitimate question: how will China’s use of AI-powered surveillance technologies be applied to Huawei’s 5G network?

While states throughout history have used new technologies against civilians, AI has the power to augment the scale, scope and spread of monitoring, surveillance and repression. AI’s ability to collect vast amounts of personal data every minute at relatively low cost gives state agencies the capacity to conduct levels of intrusive surveillance that manpower alone could never achieve.

What was once in the realm of Orwellian fiction is now being realized in China. As summarized by MIT researcher Jonathan Frankle, “this is an urgent crisis we are slowly sleepwalking our way into.”

Clearly, greater collaboration between states will be necessary to prevent AI-enabled human rights abuses and to ensure that authoritarian regimes do not export their technology and practices to other countries. The international community must put in place a human rights framework that will protect citizens from AI’s most dangerous and lethal applications.

While individual countries are free to act in their own jurisdictions, only global cooperation will establish the norms and rules that are needed to protect citizens the world over from the growing nefarious use of AI.
