Technology is advancing: it generates employment, creates prospects never conceived of before, and ushers humanity toward a more connected, more secure world. Yet the story of the Uighur Muslim minority in China tells of an ominous regime of mass surveillance powered by artificial intelligence (AI).
What happens when a tool designed to safeguard society and aid law enforcement puts ethnic minorities in its crosshairs? The Uighur Muslims of China's Xinjiang region would know; unfortunately, not many of them are around to tell the tale.
Life Under the Lens: A Look Inside China’s AI-assisted Surveillance Empire
The People's Republic of China has emerged as a world leader in facial and biometric recognition technology, which promises a safer, faster, and healthier tomorrow. But at what cost?
In December 2020, The Guardian reported that e-commerce giant Alibaba Group 'offered clients facial recognition' to identify Uighur Muslims, an ethnic minority group in China. The Wall Street Journal reported that the United States government has reason to believe China is 'committing genocide' against the Uighurs.
A report published in mid-December by IPVM, an American video surveillance research firm, revealed that Alibaba's proprietary Cloud Shield moderation service "detects and recognizes text, pictures, videos, and voices containing pornography, politics, violent terrorism, advertisements, and spam, and provides verification", and that it can determine a subject's ethnicity, including whether the subject is of Uighur descent.
Former U.S. Secretary of State Mike Pompeo said in a statement that the State Department possessed 'exhaustive documentation' of China's human rights violations against not only the Uighurs but also Kazakhs, Kyrgyz, and other minority ethnic groups in the Xinjiang region.
In the hands of an authoritarian regime, artificial intelligence becomes a tool of subjugation, standing against the very principle of its conception: to aid humankind and build a better, more harmonious world.
The AI equation is not as simple as it may seem. One might ask: how can a pile of data be a detriment to peace? An algorithm is not conscious, so how can it have malicious intent?
Machine-learning and artificial intelligence algorithms are known to inherit the biases of the humans who build and train them, and when left unchecked or deliberately abused, they can entrench and amplify discrimination at scale.
A 2019 study by the National Institute of Standards and Technology underlined a key flaw in AI-powered facial recognition algorithms: many of the systems tested falsely matched Asian and African-American faces 10 to 100 times more often than Caucasian faces, an alarmingly wide disparity.
The phenomenon, referred to as AI bias or algorithmic bias, is notoriously commonplace: it occurs when a machine-learning model absorbs the systemic prejudices and erroneous assumptions embedded in the data it is trained on, as the sketch below illustrates.
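To see how this can happen without any malicious intent in the code itself, consider a minimal, entirely synthetic sketch in Python with scikit-learn. It is not drawn from any real facial recognition or surveillance system; the feature model, group sizes, and classifier are all assumptions chosen purely for illustration. A classifier is trained on data that over-represents one group and under-represents another, and it ends up with a far higher false-positive rate for the under-represented group.

```python
# Toy illustration of representation bias: a hypothetical, synthetic example,
# not a reproduction of any real system. Requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift, label_noise):
    """Generate n samples of 2-D 'embeddings' with binary labels for one group."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    # Flip a fraction of labels to simulate noisier data collection for this group
    flip = rng.random(n) < label_noise
    y[flip] = 1 - y[flip]
    return X, y

# Majority group: plentiful, clean data. Minority group: scarce, noisier data.
X_maj, y_maj = make_group(5000, shift=0.0, label_noise=0.02)
X_min, y_min = make_group(200,  shift=1.5, label_noise=0.10)

X_train = np.vstack([X_maj, X_min])
y_train = np.concatenate([y_maj, y_min])

clf = LogisticRegression().fit(X_train, y_train)

def false_positive_rate(X, y):
    """Share of true negatives that the model wrongly flags as positive."""
    pred = clf.predict(X)
    negatives = (y == 0)
    return (pred[negatives] == 1).mean()

# Evaluate on fresh, noise-free samples drawn from each group's distribution
X_maj_test, y_maj_test = make_group(2000, shift=0.0, label_noise=0.0)
X_min_test, y_min_test = make_group(2000, shift=1.5, label_noise=0.0)

print("Majority-group false-positive rate:", false_positive_rate(X_maj_test, y_maj_test))
print("Minority-group false-positive rate:", false_positive_rate(X_min_test, y_min_test))
```

Because the model's decision boundary is fitted overwhelmingly to the majority group's data, the under-represented group's false-positive rate comes out many times higher. The disparity emerges purely from the skewed training set; no one programmed the model to discriminate.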
A tool is only as good as the hands that wield it.
When we hear how a global superpower is harnessing AI to build a hyper-surveillance system that monitors and persecutes some of its people and subjects the rest to a life under the lens, we are compelled to reconsider the technology's place in the world.
The mass detention and subjugation of the Uighurs in China is testimony to the fact that as long as technology exists, so will those who seek to weaponize it.